www.elsblog.org - Bringing Data and Methods to Our Legal Madness

11 July 2007

So says Alan Blinder, professor of economics at Princeton and former vice-chairman of the Federal Reserve Board, in this article from today's New York Times. Here is an excerpt.

“There is much too much ideology [among professional economists],” said Alan S. Blinder ... . Mr. Blinder helped kindle the discussion by publicly warning in speeches and articles this year that as many as 30 million to 40 million Americans could lose their jobs to lower-paid workers abroad. Just by raising doubts about the unmitigated benefits of free trade, he made headlines and had colleagues rubbing their eyes in astonishment.

“What I’ve learned is anyone who says anything even obliquely that sounds hostile to free trade is treated as an apostate,” Mr. Blinder said.

And free trade is not the only sacred subject, Mr. Blinder and other like-minded economists say. Most efforts to intervene in the markets — like setting a minimum wage, instituting industrial policy or regulating prices — are viewed askance by mainstream economists, as are analyses that do not rely on mathematical modeling.

That attitude, the critics argue, has seriously harmed the discipline, suppressing original, creative thinking and distorting policy debates. “You lose your ticket as a certified economist if you don’t say any kind of price regulation is bad and free trade is good,” said David Card, an economist at the University of California, Berkeley, who has done groundbreaking research on the effect of the minimum wage.

Obviously, we here at the ELS Blog are supportive of any efforts to use empirical research to check our fundamental assumptions. Because of the huge influence of economics on public policy, this internal debate is a very healthy development for all of us.

08 July 2007

In an article questioning how much credit Rudolph Giuliani deserves for reducing crime in New York City, today's Washington Post discusses the research of economist Rick Nevin. Nevin argues that variations in violent crime are significantly related to childhood exposure to lead. He has studied crime rates and lead exposure in nine countries and concluded that "[s]ixty-five to ninety percent or more of the substantial variation in violent crime in all these countries was explained by lead."

The Post article is here. A more detailed discussion of Nevin's research is here.

29 June 2007

Fresh from my mailbox, the July issue of the ABA Journal has an article on the ideological drift of Supreme Court justices. Author Richard Brust spoke with Lee Epstein of Northwestern University. Along with co-authors Andrew Martin, Kevin Quinn, and Jeff Segal, she has an article on this topic scheduled for an upcoming issue of the Northwestern Law Review.

“[S]ays Epstein, ideological drift is the rule rather than the exception. Of the justices appointed since 1937 -- the rise of the New Deal court -- almost all have grown either more liberal or more conservative during their tenures. Some have shifted several times. ‘Very few have not drifted,’ says Epstein. Of the 26 justices over the last 70 years who served for at least 10 terms, only four can be seen to have stuck to their original ideologies.”

Which four justices remained anchored to their original ideological dispositions? The answers are (after the break):

12 June 2007

In a recent paper, Should Legal Empiricists Go Bayesian?, Jeff Strnad (Stanford) makes a case for Bayesian (rather than frequentist) approaches that should interest Bayesians as well as non-Bayesians. Jeff makes the particular point that Bayesian models can "enable a much more natural connection between the normative or positive issues that typically motivate such studies and the empirical results." He concludes that Bayesian methods "have much to offer legal empiricists." Despite the paper's "heft" (108 manuscript pages--you've been forewarned), it is well worth a read.
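For readers who want the contrast in miniature, here is a toy sketch (my own illustration, not drawn from Strnad's paper) of how a Bayesian conjugate update differs from the frequentist point estimate for a simple proportion. The data are hypothetical:

```python
# Sketch: frequentist vs. Bayesian estimates of a proportion
# (e.g., the fraction of cases in a sample that settle before trial).
# Hypothetical data: 7 settlements observed in 10 cases.

def freq_estimate(k, n):
    """Frequentist point estimate: the sample proportion (the MLE)."""
    return k / n

def bayes_posterior_mean(k, n, a=1.0, b=1.0):
    """Conjugate Beta-Binomial update: a Beta(a, b) prior combined with
    k successes in n trials yields a Beta(a + k, b + n - k) posterior.
    Returns the posterior mean."""
    return (a + k) / (a + b + n)

k, n = 7, 10
print(freq_estimate(k, n))               # 0.7
print(bayes_posterior_mean(k, n))        # uniform prior: 8/12, about 0.667
print(bayes_posterior_mean(k, n, 5, 5))  # informative 50-50 prior pulls toward 0.5
```

The point of the sketch: the Bayesian answer is a full posterior distribution (summarized here by its mean) that blends the data with prior information, which is part of what makes the connection to normative assumptions more explicit.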

07 June 2007

The variety of topics to which ELS can profitably be applied--as shown, for instance, by the recent AALS conference and by the successful ELS conferences--is, of course, both encouraging and daunting. As one perhaps atypical example, a month or so ago there was an interesting exchange on the PropertyProfs listserv looking for examples of empirical legal articles examining property issues. I am certain there are more than the ones identified, as this request focused on real property--and I hope readers will identify more--but the ones that were mentioned are excellent examples of the sort of useful ELS work that can be done. Thus, in no particular order, here are the ones mentioned:

30 May 2007

One wonderful thing about this blog is the opportunity to learn about the various methodological approaches available for data analysis. The workshops on methodologies and statistics (e.g., Northwestern's) are enormously helpful, as have been the references for teaching and catching up on statistics. Experimental work is very important, and relatively uncommon approaches such as MDS, network analysis, propensity score analysis, and others can give good insight into what is going on with both conventional and unconventional data.

But I'd like to highlight one approach that cuts across many of these, and in my view is one of the most important contributions ELS can make. I'm a big fan of meta-analysis--of the quantitative synthesis of existing research studies--and advocate its use by ELS scholars, courts, agencies, practitioners, ice cream truck vendors, etc.

I've made some of my case for meta-analysis in a piece coming out in Temple Law Review, so I will be brief here. First, descriptively, meta-analysis is just that--analysis one level up. That is, primary research uses individual units (person, cases, courts, etc.) as the unit of analysis. The individual units that meta-analysis involves, though, are the empirical studies themselves from a particular body of research. Synthesizing the results of each study, the goals of meta-analysis are (1) to identify the presence or absence of an effect in an existing empirical literature; (2) to evaluate the strength of that effect, for instance by summarizing the average effect across a set or subset of studies; and (3) to identify moderator variables, elements of the various studies that might have reliably affected their outcomes. Thus, a good meta-analysis will identify and synthesize every study in a research area; find the average effect across all those studies in order to summarize the most current state of knowledge; and then systematically compare and contrast across studies, in order to identify methodological and substantive aspects of the various studies in a discipline that might be associated with the effect sizes they report.
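For the curious, the inverse-variance weighting at the core of a fixed-effect meta-analysis can be sketched in a few lines. The effect sizes and standard errors below are made up purely for illustration:

```python
import math

# Fixed-effect meta-analysis via inverse-variance weighting.
# Each study contributes an (effect size, standard error) pair;
# the numbers here are hypothetical.
studies = [
    (0.30, 0.10),  # study 1: effect estimate, standard error
    (0.10, 0.20),  # study 2
    (0.25, 0.15),  # study 3
]

# More precise studies (smaller SE) get proportionally more weight.
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * e for (e, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))  # SE of the pooled effect

print(round(pooled, 3), round(pooled_se, 3))
```

Note how the pooled standard error is smaller than that of any single study: synthesizing studies sharpens the estimate, which is one reason meta-analysis is better at detecting "small" effects than any individual study or narrative review. Moderator analysis then asks whether study-level features explain variation around this pooled value.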

Meta-analysis has a number of benefits, especially relative to the traditional narrative lit review: it is more comprehensive; it better avoids subjective judgments about what to include in a review; it is better at identifying "small" effects; it avoids over-emphasis on misleading statistical significance and p-values (focusing on effect sizes instead); it emphasizes moderator variables; and, in my view, it gives a better grounding for policy inferences.

Meta-analyses, unfortunately, are still rare in law reviews (Chris Guthrie and Dan Orr published one recently, but few others appear; they are sometimes cited, though). They are used in court, usually in mass tort cases, but are often misunderstood by courts and experts. My hope is that the use of meta-analyses in ELS will increase: they serve well to summarize a body of research; they allow (or force) researchers and policy-makers to speak a "common language" by making cherry-picking from existing studies more difficult; and, by identifying moderator variables, they can prompt further research questions.

16 May 2007

In a very interesting new article forthcoming in Georgetown Law Review, Chad M. Oldfather of Marquette University Law School discusses whether writing really or necessarily makes judicial decisionmaking better, in light of psychology research and experimentation suggesting that under certain conditions a phenomenon of verbal overshadowing makes decisionmaking worse. This article is an interesting companion to David Hoffman, et al.'s recent scholarship, discussed here, on when and why judges write opinions.

The abstract of Professor Oldfather's article follows:

Prior commentators, including many judges, have observed that writing provides an important discipline on the judicial decisionmaking process. Those commentators have uniformly assumed that the effect will always be positive - that is, that a decision rendered pursuant to a process that includes a written justification will always be better (however better is to be measured) than a decision unaccompanied by writing. According to this view, we should always, all things being equal, prefer a decision accompanied by an opinion to one without. All things are not equal, of course, and there are many situations in which the costs of generating an opinion incontestably outweigh the benefits - such as in the case of evidentiary rulings made during the course of trial. Still, the understanding remains that writing will result in some positive contribution to the process.

This article calls that assumption into question. Drawing upon an emergent body of psychological research into the effects of both oral and written verbalization on decisionmaking effectiveness, it argues that certain types of decisions are likely to be worse if made via a process that incorporates writing. Decisions involving complex, context-intensive judgments that are best resolved via the weighing of largely inarticulable considerations are susceptible to a phenomenon called verbal overshadowing. In these situations attempts to justify a decision can lead the decisionmaker to focus on more readily verbalizable features of the problem to the exclusion of those inputs that are more important to proper analysis.

The article also investigates the significance of writing to the fulfillment of the other two primary functions of judicial opinions (aside from accuracy-enhancement), namely the creation and memorialization of precedent and the enhancement of legitimacy, and considers the differing ways in which these functions are implicated at the trial and appellate levels. The goal is not so much to generate definitive answers as to better identify the costs and benefits provided by written opinions so as to more completely ground ongoing debates concerning when opinions should be issued, what form they should take, and who should author them.

11 May 2007

In this paper we study the political economy determinants of traffic fines. Speeding tickets are determined not only by the speed of the offender, but by the incentives faced by police officers and their vote-maximizing principals. Our model predicts that police officers issue higher fines when drivers have a higher opportunity cost of contesting a ticket, and when drivers do not reside in the community where they are stopped. The model also predicts that local officers are more likely to issue a ticket when legal limits prevent the local government from increasing revenues through other instruments such as property taxes. We find support for these hypotheses. The farther a driver's residence from the municipality where the ticket could be contested, the higher the likelihood of a speeding fine, and the larger the amount of the fine. The probability of a fine issued by a local officer is higher in towns where constraints on increasing property taxes are binding, the property tax base is lower, and the town is less dependent on revenues from tourism. For state troopers, who are employed not by the local but by the state government, we do not find evidence that the likelihood of traffic fines varies with town characteristics. Finally, personal characteristics such as gender and race are among the determinants of traffic fines.

The paper is available on SSRN here and discussed in the Chicago Tribune here. Various accounts of the importance of considering monetary incentives when evaluating traffic enforcement patterns can be found here, here, here, here, here, here, here, here, etc. (H/T to theNewspaper.com for the links in the last sentence.)

Alphabetic name ordering on multi-authored academic papers, which is the convention in the economics discipline and various other disciplines, is to the advantage of people whose last name initials are placed early in the alphabet. As it turns out, Professor A, who has been a first author more often than Professor Z, will have published more articles and experienced a faster growth rate over the course of her career as a result of reputation and visibility. Moreover, authors know that name ordering matters and indeed take ordering seriously: Several characteristics of an author group composition determine the decision to deviate from the default alphabetic name order to a significant extent.

(Many thanks to Danny Sokol for the tip.) Perhaps Mirjam and B.M.S. are married; because of the ordering of the names, I would presume that Mirjam did more work on the article.

02 May 2007

While Michael linked to the paper below, it is also the subject of a NY Times article entitled Study of N.B.A. Sees Racial Bias in Calling Fouls. Two highlights from the article: First, the caption under the headlining photo--"Minnesota Timberwolves guard Mike James, left, said he did not think he was treated differently by white and black officials"--says a lot about the need for systematic empirical inquiry. Second, the NY Times engaged in peer review--"Three independent experts asked by The Times to examine the Wolfers-Price paper and materials released by the N.B.A. said they considered the Wolfers-Price argument far more sound." The article also discusses multivariate regression and Dallas Mavericks owner Mark Cuban's viewpoint on statistics. It's a must-read!

25 March 2007

Vault, Inc., a publisher that collects industry information on various professions, recently released its listing of the Top 25 Most Underrated Law Schools. (Hat tip: Paul Caron & Volokh.) The findings are derived from a survey of 512 law firm recruiting managers, hiring partners, and corporate counsel, who were asked "to name law schools that, based on their experience as hiring managers, are underrated."

As I read the list, I wondered, what is the practical implication of a law school being underrated by the people who make hiring decisions for entry level lawyers? Presumably, it means that graduates of certain law schools tend to perform better than their school's U.S. News ranking would suggest; thus, legal employers are more likely to hire them.

If this is true, what is the source of superior performance? Here are two possibilities:

Stronger Students. Some schools may enroll a stronger student body than their rank might suggest.

Better Education. Some law schools may equip graduates with more or better skills than other schools of comparable rank.

Based on some preliminary statistical analysis, there is fairly clear quantitative evidence for the first hypothesis. There is also some qualitative evidence for the second--enough to warrant some additional research.

These research questions are really important to the ranking debate because their answers could reveal a mechanism by which some students will discount U.S. News. If an "underrated" school offers a better entree to coveted legal employers, why go to the higher-ranked school, especially if the underrated school costs significantly less? Law schools that understand these dynamics are in a better overall competitive position.

23 March 2007

Last April, we posted the abstract of Robert Rasmussen's (Vanderbilt) working paper, "Empirically Bankrupt," which critiques three empirical papers in the bankruptcy area. The abstract concluded that "[f]or empirical work to be credited, at a minimum, it has to look in the right place, ask the right question and draw the right inferences. When empirical work fails to cross this threshold, its conclusions must be rejected."

One of the articles critiqued was Elizabeth Warren's (Harvard) & Jay Lawrence Westbrook's (Texas) recent study, Contracting Out of Bankruptcy: An Empirical Intervention, 118 Harv. L. Rev. 1197 (2005). A few months ago, Warren & Westbrook posted a rejoinder, "The Dialogue Between Theoretical and Empirical Scholarship," that not only addresses Rasmussen's criticism, but also tries to articulate an optimal relationship between theory and empiricism. The entire exchange between Rasmussen and Warren & Westbrook is worth reading. Here is the abstract of the rejoinder essay:

In this essay we offer brief reflections on the best process for critiquing empirical work in law and sustaining an engagement between theoretical and empirical approaches. We emphasize the importance of theoretical work in helping to shape the scholarly agenda, but we urge that theory should be more closely tied to fact. We illustrate our argument by responding to a recent critique of our own empirical work by Professor Rasmussen. His principal claim is that our work should be discounted because we reported on all business bankruptcies, both those of entrepreneurs and those in corporate form. In response, we reanalyze our data, separating the individuals from the corporations; in every case the reanalyzed data support the conclusions of our original paper to the same extent or more strongly. Similarly, his other claims about our work are shown to be incorrect.

Kicking off a recent friendly-but-frank discussion about the relevance of contemporary law review articles, Dean David Rudenstine of the Benjamin N. Cardozo School of Law offered one description: "useless blather puffed up with self-indulgence" ...

Judge Sack returned to the not-so-funny problem under consideration: today's highly theoretical articles are largely ignored and seldom cited by judges, a dramatic turnabout from a generation ago when the mostly practical content of law reviews was a significant element of judicial decision-making. Judge Sack was sorry to say that the bench now uses law reviews "like drunkards use lampposts, more for support than for illumination." ...

"If the academy wants to change the world, it must decide if it wants to be a part of the world." ...

Although Judge Sotomayor agreed that brainy law professors should not trouble themselves by contemplating reactions from the bench to their writings, she leveled a sharp gaze at the Cardozo Law faculty and declared, "If you think that judges are not as capable of creative thought as you are, I beg to differ." She added, "My question to academics: do you really think you're serving some function to someone?"

13 March 2007

One article I have been meaning to blog about is this profile of economist Kevin Murphy in the Nov/Dec 2006 issue of University of Chicago Magazine. Murphy has an impressive list of accomplishments, including the John Bates Clark Medal of the American Economic Association, which is given once every two years to the most outstanding American economist under age 40 (1997), and a MacArthur Foundation "genius grant" (2005).

Murphy is quite a character, and the article provides some fascinating details on his career history. But this passage on co-authorship really caught my eye:

"[Regarding co-authorship,] I think most people get a better product that way,” he explains. “You pool insights and eliminate oversights. First-rate coauthors make working on projects more fun and rewarding. Plus, writing is a pain.” Since earning his PhD and joining the faculty in 1986, he has written every one of his 60-plus published papers with a former teacher, a close colleague, or students. ...

As a shy but affable 25-year-old grad student, Murphy confessed to [his advisor] Topel that his goal at that stage in life was “to become the world’s best coauthor.” By the time he was 35, he was there.

Wow, that is quite a testament to collaboration. To the right is a picture of Murphy, who is an avid woodworker, in his workshop at home. Apparently, it is impossible to locate a picture of Murphy without his signature baseball cap.

26 February 2007

Paul Caron has a post on an interesting forthcoming study by Andrew Oswald (Warwick Economics) in Econometrica on the relationship between journal prestige and citations. See Andrew Oswald, An Examination of the Reliability of Prestigious Scholarly Journals: Evidence and Implications for Decision-Makers. Since a single body in the UK makes funding decisions for university research, there is significant pressure to rely upon measures of productivity that take into consideration the relative prestige of faculty members' placements. Because these journals are peer-reviewed, and the hierarchy of journal prestige in each discipline is fairly established, placement -- so the argument runs -- should be construed as a strong signal of quality.

Oswald's study is important, though the abstract is -- in my opinion -- quite misleading. This claim caused me to read the whole study:

The paper finds that it is far better to publish the best article in an issue of a medium-quality journal like the Oxford Bulletin of Economics and Statistics [a middle-of-the-pack journal] than to publish the worst article (or often the worst 4 articles) in an issue of a top journal like the American Economic Review.

Oswald used citation counts from articles in six economics journals over a twenty-five-year period to evaluate the reliability of placement as a quality signal. But nothing in the article actually suggests that a scholar would be better off trading down.

Basically, it boils down to this: placement is correlated with citation, but as a proxy for quality it produces large Type I and Type II errors. Articles in AER and Econometrica (#1 and #2 in prestige) did indeed get more citations. Yet roughly 16% of the articles in the less prestigious journals garnered more citations than the median AER or Econometrica article. And the most-cited articles in the less prestigious journals garnered ten times or more the citations of the four least-cited articles in each issue of AER or Econometrica.
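To see how such misclassification can arise, here is a toy sketch with entirely hypothetical citation counts (not Oswald's data). Citation distributions within journals are highly skewed, so the tails of a "lesser" journal can easily overlap the middle of a "top" one:

```python
# Toy illustration of placement as a noisy quality signal
# (hypothetical citation counts; NOT Oswald's actual data).

top_journal = [120, 60, 35, 20, 9, 5, 3, 2]   # citations per article, one issue
mid_journal = [80, 25, 12, 8, 6, 4, 2, 1]     # a less prestigious journal's issue

def median(xs):
    """Median of a list of numbers."""
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

top_median = median(top_journal)
# Fraction of mid-journal articles that out-cite the top journal's median article
beat_median = sum(c > top_median for c in mid_journal) / len(mid_journal)
print(top_median, beat_median)
```

In this made-up issue, a quarter of the mid-journal's articles beat the top journal's median article, even though the top journal's totals are higher overall: judging an individual article by its venue's average gets exactly those articles wrong.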