www.elsblog.org - Bringing Data and Methods to Our Legal Madness

26 March 2006

The HRPP Blog offers a perspective on IRBs from someone with
extensive experience as an IRB professional and consultant. There are only a
small number of posts so far, but he does comment on the Center for Advanced Study's "Mission Creep" white
paper in the most recent one.

Is change afoot in the regulation of social science research
by IRBs, especially low risk research involving surveys, interviews, and the
use of already existing data? One source of change could be the intensifying scrutiny
of IRBs. In April 2003, the Center for Advanced Study held a conference on IRBs
and published a white paper on the topic, Improving the System for Protecting
Human Subjects: Counteracting IRB “Mission Creep.” Next month at Northwestern University,
Philip Hamburger (Columbia)
and James Lindgren (Northwestern) are holding a conference on the First
Amendment implications of IRB regulations. If academic conferences can provoke
change anywhere, maybe it’s in academia. Another possible source of change is the
increasing interest of legal scholars in empirical research, which means more
IRB oversight of their work. Presumably, regulating legal scholars increases
the likelihood of an unhappy researcher filing a lawsuit against an IRB -- and
perhaps presenting the arguments generated at these conferences to a court.
Whether changes are likely and which changes are desirable are two of the
topics up for grabs this week.

During this second Blog Forum, four guest bloggers will
offer their perspectives on IRBs. Kristina Gunsalus of the University of Illinois’
Office of University Counsel and the College of Law (Urbana-Champaign)
has written widely on the topic of IRBs. She is the primary author of the
Center for Advanced Study’s white paper. Mark Hall and Ron Wright of Wake Forest University’s School of Law
have experience from the law school side of dealing with IRBs. Professor Hall
specializes in health care law and policy. Professor Wright specializes in criminal justice and administrative law. Jack Katz of U.C.L.A.’s Sociology Department is
a participant in the upcoming conference at Northwestern University.
He specializes in social psychology; ethnographic methods; urban communities; and crime, law and deviance.

At least for now in this hypothetical world (FN1), travel
plans on private roads are not covered at all, and many travel plans on public
roads are exempt from the DMV’s oversight. If it involves travel on public
roads, however, only the DMV can judge it as exempt. Some common plans are
pre-approved as exempt, e.g., travel to local grocery stores along certain
routes. But you probably still need to send an e-mail to your DMV notifying them
of your intent to use these pre-approved plans. They’d like to know what you’re
up to.

The utility of the empirical verification of theories of
judicial decision making is dependent on developing sound measures for key
theoretical concepts. This may seem so obvious a statement as to be trivial,
but the truth is that most scholars, most of the time, are not appropriately
self-conscious about the issue of measurement. Certainly we
worry about things like inter-coder reliability (something Jason raised in a
post on February 28). Further, there are, of course, all sorts of debates over
how to measure the ideology of judges (a point that is relevant for Jason’s
post on March 2 regarding the Supreme Court Ideology Project). (See, for
example, Christopher Zorn and Gregory Caldeira’s paper “Bias and Heterogeneity
in a Media-Based Measure of Supreme Court Preferences” and Epstein et al.’s
paper “The Judicial Common Space.”) And, how we measure “the law” is something
that is increasingly preoccupying scholars (as Sara’s post of March 13
illustrates).
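Inter-coder reliability of the sort mentioned above is often summarized with a chance-corrected statistic such as Cohen's kappa. Here is a minimal sketch; the two coders and their ten case labels are invented for illustration, not drawn from any of the datasets discussed:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders' labels on the same cases."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement: share of cases on which the coders match.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement from each coder's marginal label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Two hypothetical coders labeling ten decisions liberal (L) or conservative (C).
a = list("LLCCLCLLCC")
b = list("LLCCLCCLCC")
print(round(cohens_kappa(a, b), 2))  # 9/10 raw agreement corrects to 0.8
```

Raw agreement alone would overstate reliability here; kappa discounts the agreement the coders' marginal frequencies would produce by chance.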

A recent conversation with Paul Collins brought the
issue of measurement to my mind again. In particular, Paul argues that a common
measure of salience—the presence of amicus curiae briefs—is really a measure of
complexity rather than salience (in that those briefs have the potential to
bring to the fore heretofore unconsidered policy or legal dimensions). In one
sense, amicus curiae briefs certainly are indicators of salience, at least to
the interest groups or other third parties who are filing them. But, as Sara
Benesh and Harold Spaeth have argued in a 2001 American Political Science
Association conference paper, key to determining an appropriate measure of
salience is thinking about it in terms of the question: to whom is it salient?
In their January 2000 article in the American
Journal of Political Science, Lee Epstein and Jeffrey A. Segal offered a
measure of salience based on media coverage. Saul Brenner and Ted Arrington, in
an unpublished but widely circulated manuscript, evaluated the virtues and
vices of that measure, concluding that a hybrid measure relying both on New York Times coverage and the list of
major cases compiled by Congressional
Quarterly was preferable. In light of the Epstein/Segal and
Brenner/Arrington measures, Sara and Harold developed a prototype measure based
on the syllabus of a case. Their argument is that the syllabus, though prepared
by the Reporter of Decisions, Deputy Reporter of Decisions, and Assistant
Reporter of Decisions, must be approved by the majority opinion author and,
hence, is a better basis on which to assess salience to the justices themselves.

The lesson I draw from this work collectively is that we need to think
carefully not only about the reliability of our measures but also about their
validity. It may not be a particularly novel or exciting lesson, but it is
worth bearing in mind.

23 March 2006

There were earlier suggestions that we should have a new, more welcoming motto. I also received an email today in which, after saying how much she enjoyed our blog, the writer added: "[T]his is the worst motto I have
ever heard. These guys are brilliant but tone deaf." Thus, I am pleased to announce our new motto:

22 March 2006

In the new issue of the University of Chicago Law Review (Winter 2006), Bernard Harcourt (Chicago) and Jens Ludwig (Georgetown) revisit James Q. Wilson's and George Kelling's classic Broken Windows argument. Here is the abstract:

In 1982, James Q. Wilson and George Kelling suggested in an influential
article in the Atlantic Monthly that targeting minor disorder could help reduce
more serious crime. More than twenty years later, the three most populous
cities in the United States -- New York, Chicago, and, most recently,
Los Angeles -- have all adopted at least some aspect of Wilson and Kelling’s theory,
primarily through more aggressive enforcement of minor misdemeanor laws.
Remarkably little, though, is currently known about the effect of broken
windows policing on crime.

This semester I am teaching a seminar on judicial politics and behavior. It is, for the most part, one of the standard versions taught by people who primarily teach about and study American courts. Much too narrow, of course, given that it is a one-semester class structured largely by my own intellectual interests, background, and training. (I say this in the interests of full disclosure. My guess is that I don't do any better or any worse than most faculty in this regard.) But it is a small seminar (with four advanced undergraduates, two first-year graduate students, and two second-year graduate students), and the interests of the seminar participants are quite diverse. For example, two of the graduate students are primarily interested in comparative legislative institutions, one is interested in international relations, and one is interested in judicial politics, primarily in the American context but increasingly from a cross-national perspective. Accordingly, I modified my "standard" reading list to enhance the comparative courts readings and to add a section on transnational courts.

Our seminar discussions have given rise to several queries that I would like to pose here.

First, when is it appropriate to examine theories of judicial behavior in the context of, for example, the U.S. Supreme Court or state courts of last resort, and when is it more appropriate to do so in a comparative context (whether "comparative" is read as meaning comparison across several countries or as a single-country study, as long as that country is not the U.S.)?

Second, why is the literature on comparative courts (seemingly?) focused single-mindedly on the theory of separation-of-powers? Is that the only "judicial" behavior of interest in a comparative context?

Third, where do transnational courts (e.g., European Court of Justice) or quasi-courts (e.g., NAFTA's binational panels) fit in the study of judicial politics and behavior (or law & courts or public law or whatever your favorite rubric is)?

As legal scholarship becomes increasingly empirical, legal scholars need to become more aware of and sensitive to the scholarly norms that inform empirical work. One underexamined norm relates to expectations concerning data availability and the facilitation of replication. Because replication is central to the empirical scholarship enterprise, scholars owe some duty to facilitate replication by others. Such a duty, of course, necessarily implicates access to data (and, as I discuss below, possibly to coding as well). The nature, extent, and contours of that duty, however, are far from clear and, indeed, vary across disciplines. Two common scenarios illustrate some of the complexities and nuances.

One scenario involves publicly available data, such as those managed and archived by ICPSR at Michigan. For legal scholars who use such data in published work, one obvious expectation (indeed, requirement) is to identify the specific dataset by its ICPSR number in a footnote (or table note). But is mere dataset identification enough? For example, because some amount of data preparation and manipulation (e.g., collapsing, filtering, re-coding) is almost inevitable, should legal scholars also be expected to make their coding available? Similarly, should authors make table-specific coding available? Doing so would enormously reduce the burden on subsequent scholars and facilitate replication, follow-up analyses, and the like.
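As a concrete illustration of the kind of coding that might be shared, here is a minimal sketch of a single recoding step; the variable name and category codes are invented for illustration, not drawn from any actual ICPSR dataset:

```python
# A hypothetical recoding step of the sort a replication archive might
# document: collapsing a multi-category disposition code into the three
# categories reported in a published table.

def recode_disposition(code: str) -> str:
    """Map a raw disposition code onto affirm / reverse / other."""
    if code in {"1", "2"}:        # affirmed; affirmed in part
        return "affirm"
    if code in {"3", "4", "5"}:   # reversed; vacated; remanded
        return "reverse"
    return "other"                # dismissals, certifications, etc.

raw = ["1", "3", "4", "2", "9"]
print([recode_disposition(c) for c in raw])
# -> ['affirm', 'reverse', 'reverse', 'affirm', 'other']
```

Even a short, commented script like this lets a subsequent scholar see exactly which raw categories were collapsed into which analytic ones -- precisely the information lost when only the dataset identifier is cited.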

A second scenario is even more delicate, as it involves original datasets generated by scholars and not publicly archived. To be sure, it is difficult to overstate the effort necessary to develop a first-class dataset. In light of a scholar's often considerable investment of sweat equity, and because quality datasets frequently support multiple articles, what obligations attach to scholars in terms of facilitating replication efforts by others? On the one hand, a mechanical requirement to release the complete dataset, including data not yet published on (a "total disclosure duty" position), would likely deter dataset-building efforts, at least at the margins. At the other extreme, general scholarly norms would assuredly resist a "no duty to disclose" position, as some amount of disclosure and data availability is necessary so that findings can be vetted and knowledge advanced. Considerable middle ground, of course, separates these two polar positions.

I welcome thoughts and perspectives on how empirical legal scholars should resolve these (and related) issues and navigate through this uncertain and evolving terrain.

What factors influence judicial agreement versus dissensus -- the choice to write a separate dissenting or concurring opinion? A number of variables may play a role: ideology, collegiality, trial court experience (a subject of Martinek's earlier post), and the high costs of dissenting due to increased workload. (See, e.g., Edwards 1998; Czarnezki/Ford forthcoming 2006; Benesh/Spaeth forthcoming; Hettinger/Lindquist/Martinek 2003.) While legal factors have not been shown to play a major role, the choice between directional votes and agreement as the dependent variable may have consequences.

In a work-in-progress with Bill Ford, we examine aggregate disagreement rates on four U.S. Courts of Appeals in an attempt to explain dissensus as a product of the variables above and more. Below is a graph showing the overall dissent rates in the four circuits since 1946. Preliminary results as to which variables correlate with these rates are pending... stay tuned for Part II of this post. (And if interested, the data will be presented Sunday morning at the Law & Society conference in Baltimore for the panel "How Judges Make Decisions.")
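The aggregate rates behind a graph of this kind reduce to simple per-group arithmetic over case-level records. Here is a minimal sketch; the circuit labels, years, and dissent indicators below are invented placeholders, not our actual data:

```python
# Compute aggregate dissent rates by (circuit, year) from case-level records.
from collections import defaultdict

cases = [  # each record: (circuit, year, case_had_a_dissent)
    ("1st", 1946, False), ("1st", 1946, True),
    ("7th", 1946, False), ("7th", 1947, True),
]

totals = defaultdict(int)    # cases decided per circuit-year
dissents = defaultdict(int)  # cases with a dissent per circuit-year

for circuit, year, had_dissent in cases:
    totals[(circuit, year)] += 1
    dissents[(circuit, year)] += had_dissent  # True counts as 1

rates = {key: dissents[key] / totals[key] for key in totals}
print(rates[("1st", 1946)])  # -> 0.5
```

A time series of these circuit-year rates is exactly what a dissent-rate graph plots.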

21 March 2006

Three and four decades ago, students of the courts such as
Walter Murphy and S. Sidney Ulmer paid particular attention to how the group
context shaped judicial behavior and court outcomes, drawing on the political
and social psychology research of the day. However, contemporary scholars pay
attention to the group context for appellate court decision making only to the
extent that it provides an opportunity for strategic behavior. But judges, as
human beings, are profoundly affected by the identities of those with whom they
make decisions and the norms of the environments within which they make those
decisions. I would argue that not all of those concerns are, strictly speaking, strategic in nature.

To illustrate, consider designated judges. The chief judge of a circuit has the authority to temporarily assign (designate) a district court judge from his
circuit to serve as part of an appellate panel. According to data available
from the Administrative Office of the Courts, designated judges account for
upwards of twenty percent of all case participations in the United States
Courts of Appeals. Though some of these designated judges are circuit court
judges serving from other circuits, the majority are district court judges.
A basic question is whether these designated district court judges are fungible
with their circuit court colleagues or if their status as district court judges
affects their ability to be fully equal partners in the deliberative process.
Work that appeared in the Law & Society Review in 2001 by Brudney and Ditslear ("Designated Diffidence: District Court Judges on the Courts of Appeals") provides some insight into this question, as does earlier work by Thomas G. Walker that appeared in the American Journal of Political Science in 1973 ("Behavioral Tendencies in the Three-Judge District Court").

Think, too, about how the presence of a designated judge may affect the
behavior of the regularly sitting appeals court judges with whom he sits. Much
of the workload of the United States Courts of Appeals consists of cases
appealed from the federal district courts. Do the regularly sitting appeals
court judges reviewing a district court ruling in the company of a designated
district court judge approach that review in the same way or do they suppress
the error correction function inherent in the appellate court process in
deference to their district court colleague?

I am intrigued by this line of inquiry precisely because it represents an empirical question the answer to which has important implications for a normative concern. (Or, if you like, a normative question to which empirical evidence can be brought to bear.)

Sara is an Assistant Professor of Political
Science at the University of Wisconsin-Milwaukee. She received her PhD in 1999 from Michigan State University, held a tenure-track
position at the University of New Orleans for two years,
and joined the faculty at UWM in 2001. Her research focuses on the relationship between the Supreme Court and
the U.S. Courts of Appeals, and she is the author of “The U.S. Court of Appeals
and the Law of Confessions: Perspectives on the Hierarchy of Justice” (LFB
Scholarly) and co-author of “The Supreme Court in the American Legal System” (with
Jeffrey Segal and Harold Spaeth, Cambridge University Press). She is also Co-PI (with Harold Spaeth) on the
National Science Foundation-funded “Justice-Centered Databases,” a revision to
the celebrated Spaeth Databases. Sara teaches courses in civil rights and civil liberties,
judicial behavior, and political methodology at both the graduate and
undergraduate levels.

Mark Graber, the chair of the Law and Courts section of the American
Political Science Association, has announced the recipients of two
awards:

The 2006 American Judicature Society Award for "the
best paper on law and courts presented at the previous year's annual meetings"
goes to Kevin McGuire and Georg Vanberg for their paper, "Mapping the
Policies of the U.S. Supreme