Monday, August 31, 2015

by Michael Dorf

A recent article in The Guardian called my attention to a grotesque law review article that appeared in the National Security Law Journal (NSLJ), a student-edited journal at George Mason University School of Law. The article by William Bradford, an assistant professor in the Department of Law at West Point (and formerly a law faculty member at Indiana University), is a 180-page McCarthyite screed against foreign and domestic enemies--including civil rights attorneys, the U.S. Supreme Court, the Obama Administration, and especially the legal academy--for their ostensible support for Islamist enemies in the long war in which the U.S. is engaged.

I use the term "McCarthyite" literally. Although much of Bradford's article offers a reading of the law of war at odds with the reading of those whom he criticizes--which is fair enough--his tone is far from academic. He labels those with whom he disagrees cowards, anti-Americans, and fifth-columnists. More directly to Bradford's McCarthyism, he urges that scholars who cast doubt on the legality of U.S. detention, targeting, and other military practices be required to take loyalty oaths, stripped of tenure and fired, called before "a renewed version of the House Un-American Activities Committee," prosecuted for providing material support for terrorism and for treason, and subjected to military treatment as unlawful enemy combatants.

That last proposal entails the use of military force, presumably including bombing. Bradford writes: "Shocking and extreme as this option might seem, [these] scholars, and the law schools that employ them, are--at least in theory--targetable so long as attacks are proportional, distinguish noncombatants from combatants, employ nonprohibited weapons, and contribute to the defeat of Islamism." On second thought, to label Bradford's article "McCarthyite" is unfair to the late Senator Joseph McCarthy, who never proposed anything like bombing U.S. universities.

Bradford's article is absurd and, to their credit, the student-editors of NSLJ published a response by Jeremy Rabkin, a respected (former longtime Cornell, now George Mason) conservative scholar. Rabkin rightly pulls no punches in describing Bradford's article as deranged. Rabkin concludes his response by urging the NSLJ editors to acknowledge that they made a mistake in publishing the Bradford article and then to implement steps to prevent further lapses in the future.

The NSLJ did indeed acknowledge that publishing the Bradford article was a mistake and promised a review of its article selection processes. I could quibble with the characterization of the selection of the article as merely a mistake. It was a mistake in the way that politicians issuing non-apology apologies say that "mistakes were made" or that politicians and celebrities excuse their own deliberately bad, even criminal, conduct as a mistake. But this would be a quibble. Professor Rabkin asked for the acknowledgment of a mistake and so the NSLJ editors obliged in those terms. Moreover, it is clear that the current editorial board did not decide to publish the Bradford article. Characterizing their predecessors' grossly incompetent judgment as merely mistaken is perhaps a way of avoiding piling on.

The NSLJ acknowledgment of its mistake goes on: "We cannot 'unpublish' [the Bradford article], of course, but we can and do acknowledge that the article was not presentable for publication when we published it, and that we therefore repudiate it with sincere apologies to our readers." And yet the NSLJ did unpublish the article, after a fashion. On the webpage that lists the contents of Volume 3, Issue 2, links are provided for all of the articles and essays except for the Bradford article. At the same time, clicking on the link on that same page to download the entire issue produces a file that does contain the Bradford article. It can also be found in print and on subscription databases like Westlaw, Lexis, and HeinOnline.

But I recommend that interested readers get their copy from SSRN, where Bradford has (one has to assume inadvertently) uploaded a near-final draft that includes marginal comments back and forth with the student-editors. They are revealing in at least two respects.

First, it is stunning to see the student-editors focusing on minutiae such as whether a source supports a statistical claim that Bradford makes, while almost completely overlooking the outrageous substance of his article. I have sometimes noted some of the advantages of student-edited journals over peer-reviewed journals (e.g., point 3 here), but the tendency to miss the forest for the trees is a clear disadvantage, displayed disastrously in the editing of the Bradford article.

Second, in the final printed version, Bradford only says by implication that the U.S. military should be able to kill law professors with views he believes to be in error. The final version says that these scholars "can be targeted at any time and place and captured and detained until termination of hostilities." The rest of the paragraph makes clear that such targeting includes "attacks" with "nonprohibited weapons," but a careless reader might miss the implication that Bradford is advocating killing dissident legal scholars. His original draft was unambiguous. There he apparently wrote that such scholars "can be targeted and killed at any time and place" (emphasis added). A student-editor asked in the margin whether it was "okay to delete 'killed'?" In a rare display of moderation, apparently Bradford was content to make the point only by strong implication. But the draft underscores his clear intent.

Friday, August 28, 2015

by Michael Dorf
As I noted a week ago, last weekend I spoke on a plenary panel at the American Sociological Association meeting in Chicago. Here I'll give a brief report in the style of a What I Did Last Summer essay that an elementary student might write for the beginning of the term. As the title of this post suggests, I'll connect it to a broader issue in constitutional law.

The first thing I'll note is that the conference was enormous, both in terms of the number of attendees and the number of sessions. I was told there were over 6,000 attendees. I haven't checked exact figures, but that feels like an order of magnitude larger than the Association of American Law Schools (AALS) annual meeting, which is my own point of reference for a large conference. I suppose that makes sense. There are just over 200 ABA-accredited law schools in the U.S. but there are nearly 3,000 four-year colleges (and about half that many two-year colleges), most of them with sociology departments. So, upon reflection, it's not surprising that the conference is very large.

I was at first surprised that the conference organizers were able to schedule such a large conference with hundreds of panels on "Sexualities in the Social World," as the conference was themed. Were there really that many sociologists whose work focuses on sex? However, as the introductory material to the conference pointed out, sexuality--broadly defined--touches on virtually every aspect of life, including law, religion, education, mass media, military conflicts, and much more. For example, a panel on sexuality in the workplace featured the work of three scholars, one studying how gender norms affect African American professional men, another looking at how women fare in STEM fields, and a third who reported on the challenges facing LGBTQ K-12 public school teachers. And that was just one of over 300 scheduled sessions.

I could write many blog posts about the work being done by some of the people I met at the ASA conference, but here I simply want to highlight one small piece of the framing. The very first line of the introductory materials for the conference begins: "Sex usually occurs in private and is seen as deeply personal, yet it is also profoundly social." That's right of course, and it connects to a long-recognized oddity of the Supreme Court's jurisprudence involving sexuality. For a time, anyway, that jurisprudence was framed as a right to "privacy."

Partly this is a matter of historical accident. The leading modern case protecting sexuality is Griswold v. Connecticut, in which Justice Douglas rooted the right of married couples to use contraception in "notions of privacy surrounding the marriage relationship." Griswold's reliance on privacy is sometimes criticized on the ground that no one was prosecuted for using contraception in a marital bedroom in the particular case; it was a test case brought by Planned Parenthood officials who were charged as accessories for distributing contraceptives. This sort of criticism is overstated. As Professor Colb has explained, Griswold itself can really be defended as involving Fourth Amendment privacy. It does not follow, however, that all of the cases building on Griswold are best conceptualized as privacy cases.

Partly in response to the fact that the Fourteenth Amendment does not include the word "privacy," about 25 years ago the Court began shifting the nomenclature of the rights formerly recognized under the rubric of privacy. For example, in Obergefell v. Hodges, the majority opinion only uses the "right of privacy" phrase once, and then embedded in a quote. The dissents use the term in quotation marks to indicate disapproval.

In Obergefell, as in other opinions written by Justice Kennedy, the term “liberty” plays the role formerly played by “privacy.” “Liberty” has the advantage of appearing in the text of the Fourteenth Amendment and, in addition, it captures the greater breadth of interests at stake. Whereas married couples subject to the contraceptive use prohibition in Griswold really were at risk of suffering harm to marital privacy, the right to marry itself is mostly about public aspects of marriage—both concrete benefits such as inheritance and insurance eligibility as well as the intangible benefit of being able to hold oneself out as married. Justice Kennedy’s use of the term “dignity” can be understood as referring to these intangible benefits. Although some critics (such as Justice Thomas in his Obergefell dissent) are no happier about “dignity” than they are about “privacy,” it better captures some of the aspects of the reasons for protecting a right to marry.

One could imagine a line of doctrine specifically protecting a right to dignity and then expounding on its implications in particular cases. Indeed, case law in other constitutional democracies does just that. However, for Justice Kennedy (and thus the Court) dignity is not the substantive right itself, so much as it is an interest that counts as a reason for protecting particular aspects of liberty.

Yet “liberty” itself is too broad a term. Critics have a point when they say that just about anything that anyone wants to do could count as an exercise of “liberty.” Indeed, that’s why libertarians (like Randy Barnett) couch their account of constitutional rights as presumptively protecting liberty. But what looks to Barnett and other libertarians like a virtue of shifting to the language of liberty looks to most constitutional scholars and judges like a vice. If all infringements on liberty are going to trigger heightened judicial scrutiny, then we really will be back in the Lochner era—as the libertarians want and the rest of us fear.

Accordingly, I understand that Justice Kennedy now uses “liberty” as a kind of term of art to mean more or less what used to be meant by “privacy.” Sometimes one sees the word “autonomy” in the case law and academic literature, but to my mind autonomy is no more specific than liberty, and thus has the same defects, while lacking the virtue of liberty’s connection to the constitutional text. I think that the doctrine would be cleaner if instead it were reformulated in more or less the following way:

What was once recognized as a right of “privacy” is more properly understood to encompass a number of fundamental interests, including: privacy from government snooping about one’s intimate affairs—sexual or otherwise--absent a very good reason (as in Griswold); the interest in forming and maintaining close personal relationships (encompassing not just marriage but the child-rearing cases); sexuality understood as sexual activity (generally undertaken in private but protected for reasons that go beyond preventing the government from acting as a peeping tom); and sexuality understood as identity, although much of this work could alternatively be delegated to notions of equality.

I’m not enough of a legal formalist to think that very much turns on what labels the courts use to group categories of cases. The Justices who oppose a right to same-sex marriage or a right to gay sex would continue to oppose these rights regardless of what the majority Justices called them. However, so long as the Court has moved away from the somewhat misleading term “privacy,” it may as well adopt more accurate terminology.

Thursday, August 27, 2015

The first several paragraphs of my most recent Dorf on Law post made clear (once again) that my general attitude toward what passes for modern economics might best be described as poorly contained contempt. The title of the post itself -- "Why Am I Defending Economists -- Especially THESE Economists?" -- expressed my discomfort with the idea that I was taking the side of some prominent economists who had recently been wrongly criticized for being politically naive.

That two of those economists -- Martin Feldstein and Greg Mankiw -- are among the economists whose views I generally find least credible (and often ridiculous) made it ever so much worse. Fortunately, an op-ed that was published in The New York Times that same day reminded me of the fundamental reason why economics (as it is currently practiced nearly everywhere) is so damaging. I am back in my comfort zone.

In "The Case for Teaching Ignorance," an author named Jamie Holmes describes how scientists overstate how much they know and understate how much they do not know. Focusing mostly on medical science, the op-ed noted that students can come out of science courses believing, for example, "that we understand nearly everything about the brain." The author points out that this can deaden students to the thrill of intellectual inquiry, because it makes them think that the point of learning is to absorb existing knowledge, rather than to become aware of the limits of knowledge, which is the only way they will become excited about trying to answer interesting and important questions. In the author's words, "focusing on uncertainty can foster latent curiosity, while emphasizing clarity can convey a warped understanding of knowledge."

Interestingly, the op-ed opens with a story about a surgery professor who wanted to teach a class called "Introduction to Medical and Other Ignorance." The professor was ultimately able to teach the class, but it was evidently a struggle to have it approved. The background assumption against which she was operating was that students need to be told what we know (and that we know a lot), rather than being let in on the dirty secret about how little we actually know in many areas of inquiry.

As a graduate student in economics, I frequently taught the Principles of Economics course (which some Dorf on Law readers will know as Ec10). The lead professor in that course was none other than Martin Feldstein, who gave the opening lecture or two of each semester, before turning over the actual teaching of the course to graduate students like me. In those lectures, Feldstein did everything possible to convince our students that economists know a lot of truths about the world, and especially that we now know that some foolish things we believed in the past are not true. Students could thus confidently absorb what he was about to tell them as the established truth about economics. He then offered a series of highly dubious claims that supported conservative policy views.

In some ways, that conservative slant (and the insistent pose that he was not being at all political) was the less disturbing aspect of Feldstein's performance (which was repeated annually in front of about 800-900 students). I found myself much more annoyed by the pretense of scientific certitude. Per Feldstein, economics is a science that accumulates knowledge and never retraces its steps or moves in different directions. How could it, when there is one truth, and we are moving directly toward it? The NYT op-ed captures the problem with this attitude, noting that "many scientific facts simply aren’t solid and immutable, but are instead destined to be vigorously challenged and revised by successive generations. Discovery is not the neat and linear process many students imagine."

Admitting as much, however, would undermine the political agenda for someone like Feldstein. Interestingly, but not surprisingly, it was Mankiw who eventually took over teaching Ec10 from Feldstein. Mankiw's conservative political slant has been so extreme that he has been the subject of protests from students, who are begging for some balance in the class. But for Mankiw, and Feldstein before him, such protests are silly, because students are simply supposed to accept that the course offers them the opportunity to absorb What We Know about economics.

Such an attitude is hardly confined to two of Harvard's leading conservative economists. (Harvard's reputation for liberalism aside, the Economics Department also houses Robert Barro and Alberto Alesina. And the History Department is home to Niall Ferguson.) In my last gig in an economics department, I was approached by an older professor who was involved in a project to teach economics in high schools. He was developing a model curriculum for use nationwide. What was he teaching them? "I just want them to know the things that all economists know are true. Minimum wages are bad. Money growth always causes inflation. You know, the scientific facts."

The larger problem here is not just that economists think that they know a lot more than they know. It is that they -- much more than the surgery professor who wanted to teach a course about ignorance -- find it of surpassing importance for the world to believe that economics is a science. If even "real doctors" encounter hostility to the idea that they should acknowledge where their realms of knowledge end, it is easy to see why pseudo-scientists like economists insist on presenting their field as a "neat and linear process," lest their views be "vigorously challenged and revised by successive generations." It is too terrifying even to contemplate admitting the truth.

There are, of course, instances in which conservatives will admit that we do not know things. A talking point has emerged among conservative economists, for example, that there are simply no good economic theories to explain how to deal with the aftermath of the Great Recession, in both the U.S. and Europe. This is what drives Paul Krugman crazy, and understandably so, because that move simply ignores the excellent track record of even the simplest Keynesian economic model in explaining persistent sluggishness, the failure of inflation to emerge even in the aftermath of massive monetary stimulus, and low interest rates in a world with relatively high government deficits. "Well, no one really knows nuthin', anyway," is thus a useful dodge when the conservatives' supposedly True Scientific Knowledge fails.

In a Dorf on Law post last year, I offered a different reason that scholars might resist admitting ambiguity. A biologist who specializes in evolution and climate science had contacted me, describing how he had tried to teach a course at his university that would allow students to explore the boundaries of what we know about evolution. Even the scientists who fully understood the pedagogical value of such a course resisted having it taught, because the people who want to pretend that "the science is still out" on climate change and evolution would surely grab onto any news that a Real Scientist was admitting that those theories are incomplete.

The difference between real science and modern economics is not that the former possesses unchallengeable truths, while the latter is unmitigated mush. The problem is that economists are so deeply invested in the scientific dodge. (An MIT economist, defending her conservative conclusions on education policy, once told an interviewer that she was not concerned with politics, because "I'm a scientist.") More than almost any other field, economists cannot admit that their worldview is unscientific -- even though they could do so (as Krugman does) and still at least have the opportunity to show that their non-science can contribute to the policy debate. For far too many, their professional self-image is too fragile to allow them to admit the truth.

Wednesday, August 26, 2015

by Michael Dorf
In the aftermath of NFIB v. Sebelius, various commentators (including me) noted that during the period before the case was decided, liberals tended to dismiss as preposterous the arguments that conservatives made for the proposition that the federal government cannot use its Commerce Power to regulate the multi-billion-dollar health insurance industry via a purchase mandate. We liberals didn't take those arguments seriously because we didn't share the conservatives' underlying values and, not sharing them, we underestimated how much the arguments would appeal to judges and Justices who do share those values.

In my latest Verdict column, I warn that something like that is at least within the realm of thinkability with respect to birthright citizenship. The leading precedent, U.S. v. Wong Kim Ark, makes it very difficult to argue that children born in the U.S. to undocumented immigrants aren't citizens but the question is technically still open under SCOTUS precedents. In a Facebook post last week (not public, as is the nature of FB), Harvard Law Professor Mark Tushnet raised the possibility that we liberals could be making the same mistake of thinking that our reading of the precedents is obvious because we do not share the anti-immigrant sentiment of the Trump-led anti-immigration right. My column adds in the possibility that an anti-immigration Republican president could appoint a few sympathetic Justices.

To be sure, even doing my best to account for my own policy disagreement with the anti-immigration position, I think that the argument for denying birthright citizenship to the children of undocumented immigrants is weak, but then, it's always hard to be sure that one is accounting for one's own biases. In any event, even if we assume that children born in the U.S. to undocumented immigrants are entitled to birthright citizenship absent a constitutional amendment, it is worth responding to the substantive policy argument made by immigration foes. And in order to respond effectively to the argument, it is useful to have a sense of what's driving it.

The anti-immigration crowd's chief stated argument against birthright citizenship is that it leads to what they call "anchor babies"--a term that is widely regarded as offensive. The claim is that undocumented immigrants come to the U.S. to give birth, so that their children will be U.S. citizens and thus "anchor" their claims to stay. As explained in a Washington Post article last week, the claim is surely wrong: Having a U.S. citizen child does not confer any right to stay in the country--although the enjoined Obama Administration program would have created the possibility of temporary deferred action (but not legal status) for undocumented immigrant parents of U.S. citizens (as discussed on DoL by Professor Kalhan here).

In my view, however, the fear of "anchor babies" as incentive is a post-hoc effort to come up with a seemingly rational policy concern. The underlying sentiment is more visceral--and Trump's outrageous claims about Mexico "sending" rapists and murderers taps into its core. It may be helpful to understand the real concern by reference to a Clint Eastwood movie.

Directing and starring in the gripping but disturbing 2008 film Gran Torino, Eastwood plays Walt Kowalski, a bitter widower who remains in his Detroit neighborhood long after the other white people--including his grown sons and their families--have left. Walt is a type that only Eastwood could play: a late-70s (at the time) racist action hero with a heart of gold. He uses multiple racial slurs to describe his Hmong immigrant neighbors. (Partial spoiler alert!) The action centers around Walt's relationship with teenager Thao (played by Bee Vang). Under pressure, Thao reluctantly joins a local gang and must steal Walt's Gran Torino as his initiation. He botches the job and then ends up working for Walt as penance. They eventually become close and Walt--as a kind of aging Dirty Harry--takes on the now-estranged gang to defend Thao and his family.

The story is partly redemptive. We come to see Walt's racist language as superficial. His only friend is a barber of Irish descent, with whom Walt trades ethnic insults, so we are led to think that racism is simply a mask that Walt wears to hide his inexpressible feelings. Likewise, Walt's prejudice against the Hmong--whom he sometimes conflates with the North Koreans and Chinese he fought fifty years earlier--is only superficial. Walt's real disaffection is with the young. He comes to respect his adult Hmong neighbors, but with the exception of Thao and his sister Sue (played by Ahney Her), Walt despises the younger generation. The story's villains are second generation Hmong-Americans--the "anchor babies" their parents would have had if they had been undocumented.

But Walt has no greater respect for white American youth. Early in the film he rescues Sue and a white teenager from a confrontation with three African American teenagers but then condemns the white teenager as a fool or worse. Walt also has contempt for his own grandchildren, whom he regards as lazy and disrespectful. Walt's basic attitude--which he literally states several times in the film--is the bitter old man anthem "get off my property."

To me, that is the underlying meaning of the attempt to eliminate birthright citizenship. Yes, it focuses on immigrants--the angry Americans want to keep them off our collective property--but at bottom this is the cri de coeur of the aging white demographic, upset at least as much by their own grandchildren, with their hip-hop music and their support for same-sex marriage, as they are with the children of undocumented immigrants.

Eastwood's own political views are certainly conservative but complex. During his bizarre performance at the 2012 RNC, Eastwood's chief criticisms of Obama/empty chair were that he didn't do enough to bring down unemployment and that he was naive in thinking the war in Afghanistan was winnable given the Soviet experience. These are not the complaints of a conventional right-winger. Moreover, like all great art, Gran Torino cannot be reduced to a linear message or moral. Nonetheless, Gran Torino does seem to be a morality play, even if an unconventional one. The film plainly treats Walt Kowalski as a complicated but ultimately sympathetic hero. Walt believes in real virtues, like loyalty, personal responsibility, respect, courage, and, most of all, retributive justice. We can acknowledge that these are virtues without endorsing Walt's world view, his dangerous nostalgia, his willingness to write off an entire generation, or his blatant racism. We can understand his motivation as not entirely bad without remotely agreeing with his stated views.

So too with the people who would like to eliminate birthright citizenship for U.S.-born children of undocumented immigrants: Their rage may well be misdirected anger that begins in something not entirely ignoble; but they should be opposed nonetheless.

Tuesday, August 25, 2015

Frequent readers of Dorf on Law have seen ample evidence that I am hostile to the economics profession as it is currently constituted. Although I often find myself in agreement with those on the left side of the current divide among economists, I have made clear my discomfort with the norms (both intellectual and professional) of the field as practiced in almost all economics departments -- in the U.S. and around the world. I am certainly a "dissenting economist."

On the policy front, I have critiqued economists who advise both Republicans and Democrats, though not in equal measure. Moreover, I spent quite a bit of time two years ago describing how "orthodox left" economists such as Paul Krugman end up (perhaps inadvertently, but still quite consistently) maintaining the professional status quo by siding with conservative economists against "heterodox left" economists. (My final post on that subject can be found here, with a link in the first paragraph that leads to previous posts in that series.)

There is no doubt, therefore, that I find much to criticize in the world of credentialed economists. Even so, just because they are guilty of so much does not mean that they are guilty of everything. I thus found myself quite annoyed a few weeks ago, when a guest columnist in The Washington Post blithely offered some of the most baseless attacks on economists that I have seen in some time. The column, "This is what economists don’t understand about the euro crisis – or the U.S. dollar," was written by a prominent political scientist whose record certainly suggests that she possesses an impressive knowledge of European politics. Even so, the author's argument ultimately boils down to something like this: "There are economists with whom I disagree, and they are wrong because they only think about economics and not politics, which is what I know."

In the opening sentence of the piece, prominent U.S. economists are accused of almost enjoying the Euro/Greek crisis. They are, rather amazingly, said to be offering critiques with "more than a hint of schadenfreude." In an attempt to be bipartisan, the author then slams Greg Mankiw (conservative), Paul Krugman (liberal), and Martin Feldstein (conservative) for being variously "smarmy," "relentlessly excoriating," and "condescending." What is notable, however, is that the author never actually argues that these economists are wrong that the crisis is (in Feldstein's words) "the inevitable consequence of imposing a single currency on a very heterogeneous group of countries." Instead, "[w]hat this commentary gets wrong, however, is that single currencies are never the product of debates about optimal economic solutions."

This does not even rise to the level of a cheap shot. If ever there were three economists who cannot be accused of political naivete or ignorance, it is those three. Yes, Mankiw likes to write dumbed-down pieces for The New York Times in which he acts as if (an extreme conservative version of) Econ 101 is really all one needs to know to run the world. In fact, I have a folder on my hard drive called "Mankiw Follies," in which I keep a running list of such nonsense. My dearest hope is that I will never have enough time in my schedule to go back and read all of them, much less to write the article forming in my mind that would explain their aggregated madness.

But the arguments to which Mankiw, Krugman, and Feldstein refer are not international monetary versions of "assume a can opener." The argument was never about "optimal economic solutions" but about the very predictable results of adopting a currency union when both economic and political conditions were far from optimal. After a long -- Dare I say smarmy and condescending? -- summary of how the U.S. achieved a common currency, the columnist finally asserts that economists do not understand "a broader reality": "[M]oney has always and everywhere been part of broader projects of political consolidation. This means that it has always been highly contentious." The hell you say!

Finally, we get to the real argument, such as it is: "European leaders weren’t stupid or self indulgent when they decided to move ahead with the euro, without fiscal union or strong Europe-level democracy. They just cared more about politics and international security than economics." What the columnist should have understood is that Krugman et al. are saying something like this (with which I obviously agree): "European leaders were stupid and self indulgent when they decided to move ahead with the euro, without fiscal union or strong Europe-level democracy, because they just cared more about politics and international security than economics and because they thought that they could wish the economic realities away."

As an analogy, there really are true-believer economists who insist that any attempt to mess with "the invisible hand" will lead to ruin. Minimum wages? Horrors! Those economists are wrong, of course, but that does not mean that there are no economic constraints on what one can achieve via increases in the minimum wage. And dismissing every economic critique of the minimum wage by saying that "it's not stupid and self-indulgent to think that there are more important things than economic efficiency" is simply incoherent. I want to increase the minimum wage, but I know that it would be insane to try to set it at, say, $1000 per hour.

Now consider this admission of the level of wishful thinking: "When [European leaders] did think about economics, they hoped that a strong euro, anchored in an independent European Central Bank located in Frankfurt and built on a commitment to protecting the stability of the currency, would help resolve the problems of currency depreciation, spiraling inflation and economic instability that came with the weak currencies of the 'Club Med' countries to the south of Europe." I have no doubt that they did so hope. And other people at the time said that those hopes were not based in reality, that the result of moving too fast would be to increase instability and risk undoing all of the many important accomplishments of the project to integrate Europe.

"History does not unfold as a series of neat and sterile decisions made by people rationally trying to create economically optimal policies." I am not saying that there are no economists who would disagree with this statement. I am saying, however, that even the prominent economists whom I have harshly criticized over the years for being far too insular in their thinking are not that insular. Many economists really are politically ignorant (and arrogantly so). In this case, however, being truly politically savvy should have suggested that it was the European leaders who had "neat and sterile" little stories about how the Eurozone would work.

The Eurozone might stay together, or it might not. The history of U.S. fiscal integration is interesting in its own right, and it suggests that monetary history is not a smooth series of events. But so what? No serious analysis claims otherwise. People often overuse the claim that their opponents have merely built a straw-man argument. In this case, that claim is true. Mankiw surely is smarmy, but the economists who doubted that Europe was ready for the euro were not political naifs. Apparently, European leaders were so convinced of their own political brilliance that they thought that they could make things happen simply because it would be nice if those things could happen. That is "leadership" of the worst kind, and Europeans have paid a steep price for such arrogance. It might get worse.

Monday, August 24, 2015

By Eric Segall

Judge Richard Posner has come under heavy fire this week for reversing the grant of summary judgment against a pro se prisoner who claimed that doctors at his prison violated his rights by refusing to prescribe the drug Zantac correctly. Judge Posner’s opinion relies to some degree on independent research the Judge performed on various websites, including WebMD, the home page of the company that distributes Zantac, the Physician’s Desk Reference, and the Mayo Clinic. A strong dissenting opinion argued that it was improper for Judge Posner to go outside the record to send the case back to the trial court for more factual findings.

The issue of whether it is appropriate for appellate judges to perform internet research outside the purview of the parties' submissions is hotly contested and fraught with practical and philosophical questions concerning the role of judges and the adversarial process. I am not writing to take a position on that question generally. The point of this piece is simply that in this particular case, Judge Posner was right to go outside the record.

The plaintiff is a prisoner without the ability to hire a lawyer or his own expert witness. He asked for both in the trial court, and the judge denied his motion. The state’s expert witness in the case was the very prison doctor who allegedly refused to prescribe the plaintiff’s medicine correctly. His testimony was refuted by virtually all the sources that Judge Posner consulted, and the panel (one judge concurred in the result without addressing the research issue) simply remanded the case for a factual determination and did not conclusively decide the question.

In response to the dissent’s blistering attacks, Judge Posner said that a refusal to go outside the record in this case would “fetishize adversary procedure.” This is no doubt correct. How would a prisoner with few resources go about proving his case when he is denied access to a lawyer and expert witnesses? Moreover, as many commentators have pointed out, appellate judges, including or maybe especially Supreme Court Justices, go outside the record all the time to find facts that the formal record in the case does not support. Perhaps most importantly, in this case the court simply sent the case back for further fact-finding. As Judge Posner wrote:

We are not deeming the Internet evidence cited in this opinion conclusive or even certifying it as being probably correct, though it may well be correct since it is drawn from reputable medical websites. We use it only to underscore the existence of a genuine dispute of material fact created in the district court proceedings by entirely conventional evidence, namely [plaintiff] Rowe’s reported pain.

The availability of an enormous amount of reliable internet information is a phenomenon less than fifteen years old. How judges should use that vast repository is a difficult question that requires more study and thought. But in this case, where the issue was one of correct medical procedure, where that question can be examined through numerous respected websites, and where the adversary process pits a resource-deprived prisoner against a well-funded state defendant, it would be the height of formalism to prohibit appellate judges from consulting any source outside the formal and closed record of the case. At least in those circumstances, justice and a fair result properly trumped unnecessary legal rigidity.

Friday, August 21, 2015

by Michael Dorf
Tomorrow I will be speaking on a plenary panel in Chicago at the American Sociological Association's annual meeting. The organizing theme for the conference is "Sexualities in the Social World" and my particular panel is titled "The Politics of Same-Sex Marriage: Public Opinion and the Courts." The other panelists are Greg Lewis of Georgia State, Brian Powell of Indiana, Katrina Kimport of UCSF, and panel organizer/moderator Paula England of NYU. As the lone lawyer in this group (and one of only a handful at the conference), it's fair to say that I was asked to give the "Courts" angle.

That's not to say I'm uninterested in the politics of or public opinion regarding same-sex marriage (SSM) or other subjects that intersect with law. Indeed, although I will spend most of my allotted 16 minutes (plus Q&A) discussing the legal road to and from Obergefell v. Hodges, I also plan to insert some theory about the relation between law and social movement actors. Here I'll briefly preview my theoretical remarks. Okay, here goes:

Even in the traditional formalist view, law is a product of social forces. People express preferences through electoral politics, and after some filtering, those preferences then get expressed through legislation. When social forces lead to changes in the electorate's preferences, a reasonably responsive democratic system translates those changes into changes in law. However, in the application of formal legal materials such as statutes and constitutional provisions, judges don't (or at least aren't supposed to) take account of changing social norms and practices. We have many metaphors for the role of the judge in a legal formalist world, but the most prominent these days is the one that then-Judge Roberts offered as a nominee for the SCOTUS--an umpire just calling balls and strikes. To be sure, baseball fans know that umpires exercise considerable discretion in calling balls and strikes, with some employing wider or narrower strike zones, but everyone understood that Roberts meant to convey a fairly mechanical view of judging. In formalism, law and politics--including politics as the translation of social change--are separate.

Although formalism continues to have its champions, at least as a goal that judges should strive to achieve, most people who are interested in social movements and the law have recognized, since the advent of legal realism about a century ago, that social and political changes can translate into court results even without new legislation or constitutional amendments. For most legal theorists, however, social movements tend to be something of a black box. For example, Jack Balkin--whose book Living Originalism places social and political movements at the center of his attempted reconciliation of originalism and changes in constitutional understandings--provides virtually no fine-grained examples of how social movement actors influence courts. Other legal scholars attentive to the influence of social change on legal understandings do not provide much more--occasionally referring to judicial appointments. In any event, the basic picture is that there's a social movement and the courts take notice.

Some scholars expressly claim that the influence of social movements on law is a one-way street. Gerald Rosenberg's Hollow Hope is the leading example here. Although Rosenberg does not say much about the influence of social movements on law, he argues that courts do not bring about social change (at least absent help from the political branches). When Rosenberg spoke at Cornell earlier this year, he was gracious in acknowledging that SSM may yet prove to be a counter-example to his thesis, which is ultimately empirical. In any event, my goal here is not to argue with his thesis but simply to note that much writing about social change and law seems to take Rosenberg's view or its opposite as a tacit starting point. The question on which this branch of scholarship focuses is how (or whether) changes in law affect society, not how (or whether) changes in society affect law.

Yet, unless one is entirely persuaded by Rosenberg's thesis, one will recognize that the interaction between, on the one hand, courts and other legal actors, and, on the other hand, social movements, is dynamic. Social movements influence law and law influences social movements. Indeed, often law, or more precisely, a strategy for changing the law, is part of the social movement itself. Some of the best work by non-lawyers on legal campaigns shows how lawsuits, referenda, and lobbying can be part of a strategy of mobilization that builds a movement even when it fails in its immediate goal of attaining legal change.

To my mind, inter-disciplinary work by lawyers in combination with social scientists can make an important contribution by providing a more fine-grained picture of how this dynamic operates. My own modest contribution with respect to SSM--my 2014 study with Sid Tarrow--makes two points regarding the dynamic: First, in some circumstances (including the case of SSM), a counter-movement to a movement to change the status quo may actually place the movement's issue on the public policy agenda, thus leading the movement to champion a cause it might otherwise have neglected, at least for a time. And second, any truly fine-grained account of the relation between social movements and legal change must treat social movements themselves as consisting of movement organizations as well as grass-roots actors. Organization leaders who are reluctant to seek a certain kind of legal change because they judge its likelihood of success to be limited will sometimes be pressured to act in ways that they regard as premature or rash.

Thus, socially conservative organizations first used the prospect of SSM as a wedge issue. Nearly all mainstream politicians took the bait, and there matters stood for roughly the decade between the backlash against Baehr v. Lewin (the Hawaii case) and the recognition of a state constitutional right to SSM by the Massachusetts Supreme Judicial Court in Goodridge v. Dep't of Pub. Health in 2003. Meanwhile, at first the LGBTQ rights organizations responded timidly, fearful that aggressive advocacy for a right to SSM would spark a backlash. However, pressure from the grass roots--i.e., same-sex couples who wanted to marry--and the decentralized nature of litigation in the U.S., which enabled people to file lawsuits without the backing of the major organizations, eventually led the LGBTQ rights organizations to embrace and fight for SSM. On the other side, we haven't studied the extent to which opposition to SSM from the right was driven by grass-roots opposition or whether this was an organization-driven effort to mobilize social conservatives. As I shall be in a room full of people who study such matters, I'll ask whether anyone knows the answer. If no one does, I'll suggest that this would be a fruitful line of inquiry.

Thursday, August 20, 2015

In The Myth of the Cultural Jew, Roberta Kwall, the Raymond P. Niro Professor of Law at DePaul University, has accomplished something quite extraordinary. Applying the lessons of cultural analysis to the question of what it means to be a Jew, Kwall demonstrates, unequivocally and in a large number of contexts, that Jewish law—“Halakhah”—whether observed by the most devout “Haredim” (named for the Hebrew word for “trembling”) or by nominally Reform Jews who rarely observe commandments or attend synagogue services—is necessarily and profoundly shaped by the particular human beings who follow that law and who call themselves “Jews.” The content of Jewish law, then, takes in the proclamations of elites as well as the behavior of the masses of Jewish individuals who negotiate their lives embedded in a culture of Jews as well as the non-Jews who surround them. Because people dynamically construct Jewish law, the substance of Judaism and, accordingly, the meaning of “Jewishness” have differed over time and space. To claim that there is one and only one way to be a law-abiding Jew, in the light of the arguments that Kwall marshals in her book, is to expose oneself as ignorant and in need of the deep enrichment and fascinating story told in The Myth of the Cultural Jew.

When a book has been so expertly crafted, it is difficult to know what to say in response, other than to express gratitude to the author. But while I do express that gratitude, I wish to dedicate some space in this review, first, to exploring how Kwall’s claims ring true to my own experience of living as a Jewish person who is not very observant; second, to drawing an analogy (or really, building on an analogy that Kwall herself discusses) to the area of constitutional criminal procedure; and third, to quibbling a bit with what I regard as a perhaps secondary claim about the existence of purely cultural Jews.

My Own Life

I have written elsewhere about my own identity as a Jew and the role it has played in my writing. I observe virtually no commandments, save for those incidentally implicated in virtue of my ethical veganism, which places me more or less in compliance with the year-round dietary prohibitions. I avoid both flesh and dairy as well as all other manner of animal products, and much of Jewish dietary law centers on which animals’ flesh may or may not be consumed and in what temporal or physical proximity to dairy secretions. Yet I have not simply ignored Jewish law in this regard.

I have, for example, offered my own interpretation of the Biblical prohibition that most observant Jews today construe as a mandate of separation between flesh and dairy, a prohibition that I argue is best understood as an injunction against disrupting the intimate bond between a mother animal and her baby, one that ultimately points the way to veganism. I have thus engaged with Jewish law and brought my ethical and cultural commitment to non-violence towards animals to bear directly on that engagement. Kwall’s cultural analysis approach serves to validate and illuminate that project.

When my older daughter turned thirteen but did not want what would be recognizable as a Bat Mitzvah, my husband organized a beautiful vegan celebration (that was therefore suitable for people observing conventional Kosher rules) that he dubbed a “Not Mitzvah.” The days before the event included a visit to the United States Holocaust Memorial Museum in Washington, D.C., and the ceremony itself involved a speech in which my daughter spoke of her grandfather’s (my father’s) heroism in rescuing Jews during the Holocaust and of her own commitment to sparing animals’ lives through veganism, followed by a rendition on her saxophone of a haunting Yiddish melody called Oyfn Pripetchik, the words of which tell of a rabbi teaching little children the Hebrew alphabet, “Dem alef-beys,” through which they could learn the Torah (the Jewish written law). Jewish law and culture seamlessly informed our “Not Mitzvah” and made it as special as it was.

Constitutional Criminal Procedure

In her book, Kwall discusses the American constitutional case of Dickerson v. United States. In Dickerson, the United States Supreme Court considered the validity of a congressional statute providing that police who place a suspect in custody are not obligated to provide the now-famous “Miranda warnings.” What made this case difficult was that the Supreme Court had said repeatedly in the years that followed its decision in Miranda v. Arizona (requiring the warnings) that its protections were not required by the Constitution but represented a mere prophylactic measure. If so, it seemed to follow that Miranda was only common law (at best) that could be overruled by a clear statute, such as the one at issue in Dickerson. Yet the Supreme Court held that Miranda was a constitutional decision and that Congress therefore lacked the power to overrule it, notwithstanding the Court’s prior statements, along with numerous and continuing exceptions to Miranda that would appear to be unacceptable if Miranda were truly constitutionally compelled.

Kwall, citing Naomi Mezey, significantly observes that the Supreme Court’s decision rested in part on the fact that the Miranda warnings have become a vital part of American culture and have accordingly acquired the status of “constitutional law” in that way, because Americans view them as such. I find this cultural analysis approach to Dickerson a very satisfying one. And I would add, in the same spirit, that much of what animates the content of the Fourth Amendment right of security against unreasonable searches derives quite directly from cultural practices among non-governmental, non-elite individuals, who may reasonably expect privacy when they talk on the telephone, who would typically allow an objecting occupant’s “no, don’t come in” to take precedence over the welcoming co-occupant’s “yes, please do come in,” and who implicitly license neighbors, peddlers, and solicitors to approach their front doors briefly but do not similarly license a visiting narcotics-trained dog to pace rapidly back and forth at their front doors for several moments.

In all of these cases, the Supreme Court has determined the substance of the law largely from the cultural expectations and conduct of people who are not in charge of construing the Fourth Amendment and who are also not strictly invoking or necessarily complying with the local law of trespass. This is culture making its indelible impression on the law and shaping it dynamically over time. And it is not surprising (but surely edifying) to learn that this happens among those observing Jewish law in much the same way it happens for those subject to U.S. law.

Quibble About the Title

There is so much brilliance in this book, which takes on, among other complex areas, the various denominations of Judaism in the Diaspora, the role of Israel in Jewish identity, the religiosity (or secularism) of Israeli Jews, the place of feminism in Orthodox practice, and the challenges posed by same-sex relationships among the devout, given some of the language in Leviticus. But I must quibble with a proposition that I think may not be an absolutely central claim of Kwall’s book, the proposition that the “cultural Jew” is a myth.

Kwall proves beyond any reasonable doubt that the “Halakhic” or “purely legal” Jew is a myth. Law—including Jewish law—cannot and does not exist in a complete vacuum, though many devout people may imagine that it can and does. That is a major achievement, one made possible by painstaking and thorough research and investigation. Furthermore, this proof has lessons for law more generally that go beyond its particular application to Judaism.

But I believe there can be a cultural Jew, one whose experience of himself as “Jewish” is completely divorced from Halakhah, from Jewish law. As Kwall herself notes, for example, one of the most important indices of connection to Judaism identified by American Jews is remembering the Holocaust. Many people who consider themselves Jewish but observe no commandments (other than those that fully map onto contemporary post-Enlightenment norms) and show no interest in learning about Judaism per se feel Jewish in virtue of having been racially classified as such in the Twentieth Century by the Nazis. They are Jewish, in other words, as an ethnic identity that has formed in direct response to genocidal hatred and racialization.

I have a friend who knows that her grandparent was Jewish and that she therefore would have been vulnerable to the Final Solution had she lived in Europe at the relevant time. But that is it. That is her Jewishness.

Kwall, I suspect, would say that this Jewish identity is weak and shallow and will not survive into the future. And she might well be right about that. Perhaps, without any connection to Jewish law, Jewish culture becomes so impoverished that its preservation is unlikely. Kwall would undoubtedly view this state of affairs with alarm. She states, “if halakhah is what drives Jewish particularity, and if halakhah is figuratively embedded in the DNA of the Jewish people, then a failure to actively cultivate an appreciation of its role will inevitably lead to the extinction of the Jewish people.” I understand this alarm. Having grown up as an Orthodox Jew, I myself recall some of my teachers explicitly telling me and my classmates that assimilation—the dilution and eventual elimination of Jewish identity—is tantamount to finishing the job that Hitler started.

But the fact that the purely cultural Jew may not be a robust creature whose children and grandchildren will remain recognizably Jewish does not make the purely cultural Jew an impossibility. Kwall states that “[m]any, if not the majority, of Jews consider themselves ‘culturally Jewish,’ without recognizing that such a label is an impossibility according to cultural analysis. The culture embraces the halakhah and vice versa. One cannot exist without the other.” Further, Kwall argues, “given the inevitable intersection between law and culture, Jewish culture is meaningless absent its grounding in halakhah.”

The fact that the purely cultural Jew may not be long for this world does not, however, make the purely cultural Jew an “impossibility” or a myth. And I would go even further with this quibble. I would say that a racialized Jewish identity—combined with a sense of humor that predictably accompanies a history of persecution—may be more robust and capable of self-replication and preservation than Kwall imagines.

Tribalism is a powerful force, and the mix of blood (in the form of racial identification), land (in the form of nationalism associated with having a state where Jews, broadly defined, are welcome), and the persecution itself from which Jews might be fleeing to that state (paradigmatically exemplified by the Holocaust) could conceivably keep the Jews “going” for a long time as an identifiable group with cultural norms bereft of what I would concede are the richness and beauty contained in the Halakhah.

I say this not as a normative critique of Kwall’s claims. As between tribalism and an enduring tie of some sort to the Halakhah, I would think the latter may well be a healthier and more positive foundation on which to build the future of Jewish identity. It is not, however, the only way, as the persistence of tribalism and racialized identity around the world attests.

Perhaps I would thus have titled the book differently: “The Myth of the Purely Halakhic Jew,” “The Improbability of the Cultural Jew,” or “The Unconscious Halakhah that Permeates Most Self-Described Cultural Jews.” Kwall’s title is far more elegant, though, and I would, in any event, not focus too much attention on the title of what is a profoundly significant and scholarly contribution to our understanding of what it means to be a Jew.

Wednesday, August 19, 2015

My Verdict column for this week considers a case from the U.S. Court of Appeals for the Sixth Circuit, Huff v. Spaw, in which the court held that a person who inadvertently pocket-dials a third party retains no "reasonable expectation of privacy" (under the federal Wiretap Act) against the third party's listening to the conversations picked up by the cellphone for 90 minutes. The court's reason for this aspect of its ruling is that people can protect against the pocket-dialing phenomenon and accordingly assume the risk of such disclosure if they fail to take the proper self-protective measures. In my column, I discuss some of the problems inherent in deciding the case in the way that the Sixth Circuit did.

Here I want to consider one downside of coming out the other way and holding a third party to have violated the privacy of the person whose telephone pocket dialed the third party: it asks people to fight the very strong force of their curiosity.

When my younger daughter was an infant over 10 years ago, I had a baby monitor that I used to ensure that she was safe when she was in her room alone for a nap or for a night of (constantly interrupted) sleep. One day, when my daughter was out on the town with her babysitter and her in-the-room monitor was turned off, I suddenly noticed sound coming out of the receiver of the monitor (which was on). I at first wondered what was going on, since my daughter was not home and the monitor therefore could not be broadcasting her. I quickly realized, however, that what I was hearing was the sound of one of my neighbors talking on the telephone with her friend (though I could not hear her friend's voice). I was curious about my neighbor, so I listened for a few minutes. Nothing of note was said, though, and I eventually grew bored and stopped listening.

But what if she had said something relevant to me? What if she had said something about me or some member of my family? Or what if she had simply told a scandalous tale about herself or someone else in our building? I almost certainly would have continued to listen until I had learned everything I wanted to know about how her life intersected with mine and what she thought of my family. Given what a social species humans are, it is hardly surprising that it would have been difficult for me to turn off the monitor if it was providing me with relevant information about my life. According to some, gossip is an evolutionarily hard-wired activity in humans.

Saying this does not, of course, excuse invasions of privacy. Nonetheless, if one of us suddenly becomes privy, without any wrongdoing on our part, to someone else's secret information that may concern us (or that may have value on the "gossip" market), it is a tall order to suggest that we must actively stop the information from coming our way, either by hanging up on a call we did not initiate or by turning off a baby monitor receiver. Most of us can understand the temptation to keep listening. And to say that listening to a pocket dial invades a reasonable expectation of privacy under the Wiretap Act is to say that the recipient of the call is potentially liable in a lawsuit for listening to an uninvited surprise communication that makes its way into the recipient's ear.

At the same time, the fact that listening is so tempting in these situations may be exactly why it should be unlawful. The law need not prohibit us from doing something we have no desire to do; conversely, the more drawn we are to doing something that invades the privacy of others, the more we arguably ought to use the law's sanction to deter such behavior. Perhaps more importantly, it may be difficult to tell the difference between an innocent receipt of a pocket dial and a deliberate intrusion on privacy by the third party. To the extent that the Wiretap Act prohibits the latter explicitly, it may avoid problems of proof to extend that prohibition to the (relatively unusual) case of the pocket dial that happens to land on a third party whose knowledge of the exposed matters could be harmful to the pocket-dialer.

Tuesday, August 18, 2015

[Update: A reader has provided a link to the letter that I describe in Paragraph 6 of this post: http://www.ijdh.org/wp-content/uploads/2015/07/Letter-for-President-Obama-July-14-2015-2.pdf.]

One of the news stories that has been rattling around in the background over the last few years is a human rights crisis in the Dominican Republic (DR), which was set off by a 2013 ruling of the DR's highest court that Dominicans of Haitian descent -- even those from families who had lived in the DR for generations -- were to be stripped of their citizenship. I recall seeing a few headlines and worrying about what might be happening, but the media's coverage of the situation was sufficiently muted that I had not consciously engaged with any of the details.

As it happens, one of my recent former research assistants, who is now an attorney here in Washington, is a former Peace Corps volunteer who spent two years in the DR before starting law school. He and some other Peace Corps alums have recently been trying to bring the situation in the DR to the attention of U.S. policymakers. Having done some background research on the issues involved, I devoted my new Verdict column to the story. The situation is truly scary.

Because the Dominican Republic is the less poor of the two countries on the island of Hispaniola, ethnic Haitians have migrated to the DR over the decades. The situation has led to a fairly predictable set of social and economic problems, with different skin colors and different languages leading to systematic discrimination against Dominicans of Haitian descent. Still, the DR has been their home, both as a matter of fact and law. In 2013, the court ruling that I noted above set off a completely unnecessary internal crisis. The Inter-American Court of Human Rights ruled against the DR in 2014, finding that the government had engaged in "a pattern of expulsions," including "collective expulsions."

My Verdict column describes some of the details of the situation, noting in particular an important letter that the returned Peace Corps volunteers sent earlier this month to U.S. Secretary of State John Kerry. The DR has predictably responded by saying, in essence, that those do-gooders should keep their noses out of a sovereign country's affairs, and that there is nothing to worry about in any event. In the column, I endorse the idea that the U.S. should respond by saying, "You know what? You're right. We will stay out of your affairs. And we'll take our foreign aid with us on the way out the door."

Before the Peace Corps returnees sent their letter to Secretary Kerry, another group letter was sent to President Obama in July. Written by Florida International University Professor of Law Ediberto Roman, and signed by over 100 professors at American law schools (including my GW colleagues Eleanor Brown, Burlette Carter, and Robert Cottrol), the letter calls on the president to issue a public statement and take some diplomatic steps to stop the crisis before it gets worse.

The DR's embassy in Washington has responded by claiming that this is all a big mistake. The ambassador even sent a letter to Professor Roman, stating that the ambassador wanted to "clarify the scandalous and misleading facts" in the letter that Professor Roman had drafted. (I cannot find that letter on-line, but it is certainly not confidential, and it is being circulated widely.) [Note: See update at the beginning of this post.] The DR government's position is, essentially: "Hey, we all have immigration problems, don't we? But don't worry, because we've put in place a process that allows people to regain citizenship, and we even have some statistics to show you that the process is working." As I explain below, these reassurances are difficult to take seriously.

An article in the PanAm Post on July 1 describes the situation on the ground in the DR. Despite the government's claims that everything is being handled according to the rule of law, there is so much panic among ethnically Haitian Dominicans that many have fled the country, "self-deporting" to prevent themselves from being forcibly removed by Dominican security forces or others.

Two further points merit emphasis here:

First, that PanAm Post article raises the prospect that the DR's procedures for re-establishing citizenship are a sham. A group called Jesuit Service to Migrants, which operates in a border area, claimed that, "[i]n a maneuver to confuse and mislead national and international public opinion, the Ministry of Internal Affairs has asked the workers of this office … to open the offices, comply with a work schedule, but not assist anyone who comes by."

This is an old trick, of course. (I recall a story about a French ruse in the 1980s to reduce imports from Japan by creating what is known politely as a "non-tariff barrier." The French government set up a "port of entry" in the mountains in the middle of the country, accessible only by smaller-than-standard delivery trucks, with one desultory customs inspector assigned to process the incoming goods.) The ambassador's claim that 290,000 people have requested processing under the DR government's National Regularization Plan, and that "each applicant" will receive a review -- that is, case-by-case review of required documentation -- by the end of August (less than two weeks from now, and only 40 days from the date of the ambassador's letter) is certainly difficult to believe.
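The implied workload makes the skepticism concrete. A back-of-the-envelope sketch (the 290,000-applicant and 40-day figures are from the ambassador's letter as described above; the per-day rate is simple division):

```python
# Throughput implied by the ambassador's claim: 290,000 applications,
# each receiving case-by-case review, within roughly 40 days.
applications = 290_000
days = 40

reviews_per_day = applications / days
print(f"{reviews_per_day:,.0f} case-by-case reviews per day")  # 7,250
```

Sustaining more than seven thousand document reviews per day, every day, is the kind of pace that supports the suspicion that the "reviews" are not meaningful ones.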

Second, as Professor Roman points out at the end of that PanAm Post article, the DR government should not be allowed to hide behind the notion that this is an "immigration issue" in the first place. We are not talking about people who are showing up and now need to be processed under normal immigration rules. Instead, this whole crisis was set off by the decision to take away the citizenship of some Dominicans on the basis of their ancestry.

The ambassador's letter claims that no deportations have occurred and that no one will be deprived of Dominican nationality, if they deserve it. He adds: "In fact, individuals who have voluntarily left the Dominican Republic are entitled to return and apply for residential status." For the DR now to claim that it is magnanimously allowing people to stay, and that it will allow those who "voluntarily" departed to return, if only they can regularize their immigration status, is truly an abuse of logic. Orwell would smile knowingly.

The misdirection includes the ambassador's assurance that "the Dominican Republic will continue to support its immigrant community, including providing access to free public services, such as healthcare and education." Sounds good, right? Leaving aside questions about the quality of such services, the point of such a statement is to "other" the people involved. It is not, in this view, a story about Dominicans who were suddenly told to prove that they are truly worthy. It is about the DR's "immigrant community."

The letter and policy advocacy by the Peace Corps returnees have started to make a serious difference. The DR government finds itself under an increasingly unflattering spotlight, called out for its actions in dealing with this self-inflicted problem. Although the U.S. government is unlikely to cut funding for the DR in response to this increasingly worrisome situation, greater public awareness could generate sufficient pressure to cause a change in policy, to the benefit of a very vulnerable community.

Monday, August 17, 2015

by Michael Dorf
A recent anonymous article in the NY Review of Books argues that none of the now-conventional accounts of the rise of ISIS in fact explains the phenomenon. Casting aside the lesson of dozens of insurgencies since ancient times, ISIS seeks and holds territory while engaging in combat with regular militaries. ISIS picks fights it seemingly cannot win, and wins or at least survives. Although ISIS now has substantial funding from extortion, looting, oil, and foreign donations, it began with very little money, and was not well-positioned relative to other jihadi groups. I cannot do justice to the article, which I urge readers to examine for themselves. The author concludes that "we should admit that we are not only horrified but baffled." One is left much like Shakespeare's Othello in puzzling over the motives for Iago's evil, while Iago spits "Demand me nothing: what you know, you know."

But is there really that little that we know? Despite its astute observations, the anonymous NYRB article puzzles over some matters that really oughtn't to be puzzles at all. For example, the article traces ISIS to the organization--previously known by many names, most recognizably "al Q'aeda in Iraq"--built by Abu Musab al-Zarqawi (né Ahmad Fadhil). An American airstrike killed Zarqawi in 2006 but the NYRB article makes a persuasive case that despite the fact that ISIS has only received substantial attention in the last few years, its core was in place before Zarqawi was killed. The anonymous author marvels at how a man of so little talent could build such a horrifyingly successful organization. Yet this is perhaps the least baffling question of all. Pol Pot was an undistinguished and flunking student in Paris before rising to infamy. Who would have predicted Mullah Omar's evil success based on his early life (or what is known about it)? Or Hitler's? History is full of mediocrities from humble beginnings achieving world-altering evil chiefly through a talent for ruthlessness.

The more genuine mystery on which the anonymous NYRB article focuses is the success of ISIS in recruiting people from a wide variety of societies. The article notes:

At first, the large number who came from Britain were blamed on the British government having made insufficient effort to assimilate immigrant communities; then France’s were blamed on the government pushing too hard for assimilation. But in truth, these new foreign fighters seemed to sprout from every conceivable political or economic system. They came from very poor countries (Yemen and Afghanistan) and from the wealthiest countries in the world (Norway and Qatar). Analysts who have argued that foreign fighters are created by social exclusion, poverty, or inequality should acknowledge that they emerge as much from the social democracies of Scandinavia as from monarchies (a thousand from Morocco), military states (Egypt), authoritarian democracies (Turkey), and liberal democracies (Canada). It didn’t seem to matter whether a government had freed thousands of Islamists (Iraq), or locked them up (Egypt), whether it refused to allow an Islamist party to win an election (Algeria) or allowed an Islamist party to be elected. Tunisia, which had the most successful transition from the Arab Spring to an elected Islamist government, nevertheless produced more foreign fighters than any other country.

The sickening revelation that ISIS systematically set about capturing Yazidi women and girls to be given as sex slaves to its fighters offers one window on its recruiting success. For young men willing to believe that God smiles on the rape of polytheists, the prospect of sex slaves in the here and now is perhaps more tempting than the 72 virgins that await each of them after martyrdom. Yet surely the lust for female sex slaves fails as an all-purpose explanation for ISIS's recruiting success, at least for young straight American women. As reported today, about 550 Western women and girls have joined ISIS.

Writing in the NY Times on Saturday, Roger Cohen expresses puzzlement that echoes but does not reference the anonymous NYRB article. Cohen cites another, earlier NYRB essay, Mark Lilla's review of Michel Houellebecq's novel Soumission, which portrays a future Islamic France. As Lilla writes, Soumission "is about a man and a country who through indifference and exhaustion find themselves slouching toward Mecca." Cohen sees the appeal of ISIS as of a piece with a broader phenomenon that he associates with Putinism and, more broadly still, with disaffection with what the disaffected regard as the false promises made by a free society. Even as Cohen expresses bafflement in the face of complexity, he unwittingly channels George W. Bush simplifying the ideology of our enemies to "they hate us because of our freedom."

In any event, Cohen's concern is less immediate than the concern expressed in the anonymous NYRB article. Cohen is trying to understand the appeal of radical fundamentalism in general. The NYRB article focuses on the appeal of ISIS in particular. It accepts that some large number of people will be drawn to radical and/or reactionary ideologies but asks why this one?

Let me give what I think is at least part of the answer: People are drawn to ISIS for the same reason that people are drawn to the candidacy of Donald Trump: Because they are the MOST radical.

If one surveyed the field of Republican candidates a couple of months ago and asked where there was room for someone to make his mark, it would not have been immediately apparent that the answer would be to the right of the field on immigration and sexism. And yet, that approach has thus far worked for the Donald because in appealing to angry alienated people looking for someone to "stand up" for them, the loudest most obnoxious guy wins.

Likewise with ISIS. If one looked around at radical Islamist groups a dozen years ago, it would hardly have seemed obvious that the appeal of the Taliban, al Q'aeda etc. was limited by their moderation. And yet, outflanking al Q'aeda and everyone else on the brutality side has worked for ISIS because in appealing to angry alienated people looking for someone to "stand up" for them, the most radical group wins.

Needless to say, there are important differences between Trump supporters and ISIS recruits, the most obvious being that the latter but not the former behead people. Moreover, Trump can only succeed in his quixotic quest for the presidency by persuading a majority of voters that he's the man for the job--an impossible task given his (understandably) high negatives even among Republicans. ISIS, by contrast, is building a totalitarian theocracy, and while even non-democratic regimes depend on some level of public acceptance, that acceptance can be coerced. Accordingly, whereas Trump's negatives will keep him out of power, ISIS can thrive even as most people in the territory it controls and beyond revile it.

In the end, then, the puzzle of the rise of ISIS is not so puzzling, once one understands that people have long been willing to brutalize others in pursuit of ideologies that they embrace--including both secular and religious ideologies. The core and pressing puzzle remains the obvious one: How to combat ISIS? Unfortunately, no one--and least of all Donald Trump--has yet given a good answer.

Friday, August 14, 2015

Toward the end of that post, I reported that the Australian government had set up what is technically known as a "fully-funded" retirement system, in which workers are required to deposit money into savings/investment accounts, which are then managed by the government without further input from the worker. As I described it, the Aussie system "essentially gives workers zero control over how their savings are invested in the financial markets. Imagine a nationwide system in which Social Security payroll taxes go into a single mutual fund, and the best financial managers invest the funds with relatively low management fees. The evidence indicates that the Aussie system works rather well."

According to an Australian reader, my facts are wrong -- but it turns out that they are wrong in a way that supports my larger point. Here is the comment (which was actually posted to my Tuesday post, to which the Thursday post was connected):

"As an Australian reader of your column, I suggest that some of your comment might benefit from review. First, while it is true that a proportion of each worker's salary is placed into a fund, there is a choice of funds, some of which charge higher fees than others without any apparent higher benefit. So there is not a 'single national fund' so much as a 'single national program'. Moreover, this program allows a worker to choose his investment strategy (high growth, cash and bonds and so on) and leaves his wealth at retirement as an outcome of the decisions he makes about investment options. In other words, the risk lies with the individual. Second, there is no guaranteed pension for Australian residents other than for those who have less than a determined level of wealth (not including the family home). So for many, including me, the only income I have is what I am able to make from my own wealth, and with low % income over the past few years (2 - 3%) a couple needs about $2M in invested capital to have an income of about $60k to $70k/year. Virtually no-one has this level of capital. Finally, this scheme does not take account of longevity 'risk' - that is, one outliving one's pension balance. All in all, I'd prefer to put my pension fund into a collective account and have a determined and guaranteed payment. The Australian arrangement may be artful, but it is also uncertain and may be contributing to a very conservative investment profile within Australia that is fettering industrial innovation and economic activity and development."
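The commenter's arithmetic checks out, at least at the upper end of the quoted return range. A quick sketch (the $2M capital figure and the 2-3% returns are the commenter's numbers):

```python
# Annual income a couple could draw from invested capital at the low
# returns the Australian commenter reports (roughly 2-3% in recent years).
capital = 2_000_000  # the commenter's ~$2M figure

for rate in (0.02, 0.03):
    print(f"{rate:.0%} on ${capital:,} -> ${capital * rate:,.0f}/year")
```

At 3%, $2M yields $60,000 a year, which is the bottom of the commenter's $60k-$70k target; at 2%, it falls well short. Either way, the point stands: very few retirees have anything like $2M in invested capital.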

This is certainly bad news for Australian workers. And as I noted above, that bad news helps to explain why the U.S. Social Security system is so superior to the alternatives. We know that our ERISA rules -- as complicated and onerous as they are -- do not actually protect workers from the various pitfalls of putting one's retirement money into private savings. When Australia's conservative government installed this very neoliberal plan, they essentially adopted a super-ERISA set of protections. Even so, they still left workers to "manage" their investments, leaving the risk at the individual level. Moreover, as the Australian commenter notes, a decent retirement is simply not achievable under this plan.

That is not to say that there couldn't be a system of private accounts that is exactly as I described in Thursday's post. But the real-world example that I provided in the post turns out, sadly, not to live up to that standard, and it instead stands as a cautionary tale for those who imagine that Social Security can be easily mimicked in a system of private accounts.

by Michael Dorf
In a recent speech, Jeb Bush accused Hillary Clinton of standing "by as th[e] hard-won victory by American and allied forces [in Iraq] was thrown away." It's tempting to react to this accusation by reaching for something breakable to throw, given the inanities embedded in the statement.

First, by 2009, sectarian violence had declined from its peak a few years earlier but that hardly counts as "victory." Second, the suggestion that, if only the U.S. had maintained a large force in Iraq a few years longer, Iraq would now be a multi-ethnic paradise, is delusional. Third, the claim ignores the fact that the troop draw-down under Obama (and Clinton) proceeded on a timetable that George W. Bush had himself set. Fourth and most galling of all, to suggest that the rise of ISIS and the broader Sunni/Shiite conflict throughout the Middle East is chiefly the result of a former Secretary of State's failure to urge prolonging the U.S. combat mission in Iraq, while ignoring the Bush family's disastrous destabilization of the Middle East over the course of two decades--beginning with Jeb's father's mixed signals that led to Saddam's invasion of Kuwait, then to the first U.S. invasion of Iraq and the continuing presence of U.S. troops in the Gulf after its inconclusive end, then to Jeb's brother's determination to manufacture pretexts for invading Iraq again, exacerbated by incompetent prosecution of the post-war effort--is really too much.

Nonetheless, for all of that, if we disregard the messenger, there is something to the message. The Obama Administration inherited a mess from the Bush Administration but to paraphrase Rummy, you don't get out of a war from the country you wish you had invaded; you get out of the war you were in. And yet the Obama policy in the Middle East has always had an element of magical thinking about it. He wouldn't have invaded Iraq in 2003 both because it was a "dumb war" on its own and because doing so meant diverting attention from Afghanistan, where the U.S. dropped the ball in the fight against al Q'aeda. Fair enough. But Obama took office in 2009, by which time the situations in Iraq and Afghanistan were the product of the intervening years. And yet, in its overall shape, the Obama policy looked like it could only succeed with the help of a time machine: Extricating the U.S. from the conflict in Iraq and re-focusing on the battle against al Q'aeda wouldn't magically undo the damage that had been done in the Bush years. If jihadis operating out of Afghanistan were the main Middle Eastern threat to U.S. security in 2003, by 2009 it's at least arguable that the chaos unleashed in Iraq should have been a higher priority. (I remember reading something in the New Yorker to this effect some years ago, perhaps by George Packer, but I can't seem to find it.)

None of this is to say that Obama had any good options in Iraq or Afghanistan. In both, the choice was always between staying indefinitely, with our military presence fueling the anger and aiding the recruitment efforts of the groups we're fighting, and leaving, with a bloody civil war following. When all is said and done, leaving looks like the less bad option, at least from the U.S. perspective, which is how one expects a U.S. president to see things.

Meanwhile, of late Republican (and some Democratic) hawks have displayed a penchant for magical/time-machine thinking of their own. In criticizing the nuclear deal with Iran, they argue that Iran drove a harder bargain than the U.S. If only John Kerry had taken a firmer stance in negotiations, he would have gotten anytime-anywhere inspections or whatever.

Maybe that's even true. Perhaps Donald Trump's Secretary of State--or the Donald himself!--sitting at the bargaining table would have so terrified Javad Zarif and his overlords that they would have given over the keys to the nuclear kingdom. But now that that didn't happen, criticizing the deal that was obtained won't unring the bell. Rejection of the Iran deal will not enable the current or any future Administration to go back to the bargaining table and get a more U.S.-favorable deal because the U.S. will not have the leverage that it had at the last set of negotiations, with the backing of the P5+1.

Maybe, and even this is a big stretch, the UK, France, and Germany could be persuaded to maintain sanctions during renegotiations, but there's no chance that Russia and China would. Put simply, the choice is not between the deal that was reached and the status quo ante but between the deal that was reached and a no-deal/no-effective-sanctions regime. Unless the hawks have a time machine.

Thursday, August 13, 2015

In my Verdict column earlier this week, I ran through a few of the most common arguments that conservative critics of Social Security repeat ad nauseam, showing each of those arguments to be based on nothing more than an inability to understand basic accounting. I then used the most obviously false of those arguments -- that Social Security is a Ponzi scheme -- to frame Tuesday's Dorf on Law post, in which I explained how and why private savings accounts are no more "real" than Social Security's finances, including the much-misunderstood retirement trust fund.

People imagine that banks hold piles of money, whereas Social Security supposedly spends its money right away. In fact, both banks and Social Security send their money right back into the financial system as soon as they receive it, yet both are able to keep the promises that they are making. The argument should not be about whether one set of promises is more real than the other, because it is legal and ultimately political commitments that underlie the ability of all financial promises to be kept. If those commitments change, then of course outcomes could change, too.

I should note that, in nearly all of my writings to date, I have taken as a given the claim that Social Security's promised benefits come with a legal asterisk, that is, that the "promise" that Social Security has made goes like this: "Based on your earnings trajectory, you will receive $X per month in benefits when you retire (setting aside early or late retirement). BUT, if the trust fund ever runs to zero, your benefit will be reduced by y%." That is the asterisk that Social Security includes on its official forms (although it has been impossible for me to find out when or why they started to do that), and the annual trustees' report provides the latest best guess of whether and when the trust fund will reach zero, and if so, the value of y. (The trustees' most recent preferred estimate says yes, it will run to zero in 2034, and y = 21%.) For reasons that I will explain in a future column and/or post, even that is not necessarily true. That is, it is an open legal question whether there will be any reduction in benefits, even if the trust fund goes to zero. This means that it might not end up being true that low-information people will be unpleasantly surprised by a one-time cut in benefits.
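To make the asterisk concrete, here is a minimal sketch of what a y = 21% cut would mean for a hypothetical benefit near the program's average (the 2034 date and 21% figure are the trustees' estimates cited above; the $1,333/month benefit is an illustrative number, roughly $16,000 per year):

```python
# Illustration of the "asterisk": if the trust fund reached zero in 2034
# and benefits were cut across the board by y = 21%, what would a
# hypothetical average-ish benefit look like?
monthly_benefit = 1333  # hypothetical; near the ~$16,000/year average
y = 0.21                # trustees' estimated cut if the fund hits zero

reduced = monthly_benefit * (1 - y)
print(f"${monthly_benefit}/month -> ${reduced:,.2f}/month after a {y:.0%} cut")
```

That is, the worst-case scenario on the official forms is a one-time drop from roughly $1,333 to roughly $1,053 per month -- a serious cut, but not a collapse to zero, and (as noted above) even that cut is not legally certain.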

That explanation, however, will have to await another day. Today, I published a new Verdict column, in which I discuss recent proposals by progressives to go on the offensive regarding Social Security. That is, rather than remaining in a reactive, defensive crouch, simply responding to baseless attacks on Social Security in an effort to maintain the status quo, some of the highest profile progressives in Congress (currently led, of course, by Senator Elizabeth Warren) and the leading left-leaning policy think-tanks (especially the Economic Policy Institute) have started to demand an economically progressive increase in Social Security benefits. This is a fight that was started by now-former Senator Tom Harkin before he retired.

In today's column, I do not go through any of the details of the Harkin proposal, and I will not do so here. (It is a safe bet, however, that I will get to that soon enough.) Instead, I simply describe the Harkin proposal as increasing benefits for lower- and middle-income retirees, fully financed by progressive revenue increases. In fact, the proposal also includes financing provisions that would make the "21% cut in 2034" thing moot, so that it deals with any lingering concerns about the current system and then adds a paid-for progressive expansion of benefits.

My column runs through the obvious reasons that such a plan is needed. More and more people rely on Social Security, mostly because of the rise of inequality (which, from workers' standpoints, has simply meant stagnant real wages since about 1980 or so), which prevents workers from saving enough money in defined-contribution savings plans. The Social Security system is quite modest, currently providing retirement benefits averaging $16,000 per year ($1,333 or so per month). This proposal thus provides the late-in-life component to complement the "America needs a raise" movement.

The centerpiece of today's column, however, is the question of whether it is too politically risky even to offer to open up the Social Security system to a hostile Congress. Perhaps the best that can be done is to continue to play defense. I ultimately come down in favor of giving it a try now, but I concede that it is a reasonable argument either way. Interested readers can read more in the column. For the balance of this post, however, I want to discuss the other major issue that I discussed in today's column, which is whether it would make sense to expand retirement savings, through any of a variety of policy levers.

As I point out in the column, the idea of giving people more incentives to save will not solve the immediate problem facing Social Security-reliant retirees. For that matter, any such plan would really begin to make a difference only for people who are a few decades away from retirement. Even so, imagine that the Harkin proposal goes nowhere, for any of a number of reasons, but Congress considers a long-term expansion of retirement savings incentives. Or imagine that we decide both to increase Social Security benefits right away and also to put more savings incentives in place, in the hope that the future private savings might allow a future Congress to peel back some of today's increases.

Either way, we are left with the dilemma of how to structure a plan to enhance the private retirement savings that are currently driven through the various 401(k)-type programs. In a pair of Dorf on Law posts earlier this summer (here and here), I described the fundamentals of such "neoliberal" approaches to retirement security. As I noted, any serious effort to expand private retirement saving has to confront the many cognitive biases that distort the decisions made by even the most savvy people. (In the second of those blog posts, I noted a Harvard economics professor's description of his own far-less-than-optimal behavior when it comes to planning for his retirement.) My argument there was, in one sense, reductio ad absurdum; that is, I argued that doing everything necessary to fight the various cognitive biases would end up creating a system that would look very much like Social Security, but with higher administrative costs (and, by the way, without the progressivity).

Imagine, however, that we are for some reason committed to using private savings accounts to enhance retirement security. What would that actually look like? An article (forthcoming in the Indiana Law Journal) by Marquette Law School Professor Paul M. Secunda provides an interesting positive description of such a system, as well as a normative case for adopting that system.

Professor Secunda first describes why the current 401(k)-led system has been a disaster. The bulk of his analysis describes the behavioral economic case against relying on people to make long-term decisions in a complicated environment. I have expressed serious doubts (here and here) about the usefulness of so-called Behavioral Law and Economics (BLE), but I have always acknowledged that BLE at least tries to confront real-world phenomena that standard economic theory ignores.

In any event, Professor Secunda lays out a very convincing case for what he calls "paternalistic workplace retirement plans." They are paternalistic precisely because even some Harvard Economics professors cannot fend for themselves in a wide open financial system. All kinds of restrictions are necessary. Professor Secunda then points out that such a restrictive model actually exists: Australia's "Superannuation Guarantee," which essentially gives workers zero control over how their savings are invested in the financial markets. Imagine a nationwide system in which Social Security payroll taxes go into a single mutual fund, and the best financial managers invest the funds with relatively low management fees.

[Update: It turns out that my description of the Australian system in the paragraph above might best be described as a rose-colored-glasses version of reality. I have written a short update here, explaining how some additional information changes my assessment of the Australian system, but in a way that supports my larger argument in this post. Certainly, I no longer can confidently stand by my positive statements in the paragraph immediately below this one.]

The evidence indicates that the Aussie system works rather well. Professor Secunda, for his part, concludes from his investigation that we should use the Australian approach to reform our current 401(k)-led world, to the benefit of workers. I wholeheartedly agree. Opponents of Social Security, however, could point to Australia's system as proof that private accounts could do all of the work that Social Security does here in the U.S. Is that the right lesson?

At best, there is a possible argument that, if we were starting from scratch, we could design a "fully funded" retirement system that is functionally equivalent to a pay-as-you-go system. But we are not starting with such a system, and as I argue in my Verdict column today, even a proposed expansion of Social Security needs to be pay-as-you-go. Moreover, Professor Secunda's argument supports my point that the only safe alternative to Social Security would be a backdoor Social Security system.

Even if we ignore immediate needs and think only about making changes that would have good effects decades from now, therefore, there is no good case to change the current Social Security system. And if we want to enhance future retirement security, simply adding onto the current system would have the lowest administrative costs, and it could accomplish everything that an Aussie-style system would achieve. The current system in the U.S. is readily expandable. The only question is whether the people who oppose the system on ideological grounds will succeed in destroying it, or at least will end up forcing us to adopt an add-on system that squanders Social Security's systemic advantages.