www.elsblog.org - Bringing Data and Methods to Our Legal Madness

05 May 2006

A recent thread on the Statistical Modeling, Causal Inference,
and Social Science Blog (SMCISS Blog?) raises some interesting
issues associated with tests of statistical significance. The discussion
was motivated by a hypothetical situation involving a direct mail marketing campaign, the
text of which I have slightly edited:

Group 1: 50,000 customers are mailed a catalog. Of these, 100
made a purchase, and the mean of their spending was $50 with a standard
deviation of $10.

Group 2: 50,000 customers are mailed a catalog. Of these,
120 made a purchase, and the mean of their spending was $55 with a standard
deviation of $11.

Some questions: Is there a statistically significant
difference in the mean purchases associated with these two groups of customers?
What is the appropriate N to use in testing for a difference? (If it’s 50,000,
is there any reason to even bother with the test?) More importantly, for a decision-maker in this hypothetical company, what turns on whether
the difference is statistically significant? These questions are discussed here and then here.
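One way to formalize the first question, treating the purchasers (rather than the full 50,000 mailings) as the relevant N for comparing mean spending, is Welch's two-sample t-test computed from the summary statistics. A minimal sketch, not from the original posts:

```python
import math

# Summary statistics from the hypothetical catalog example above
n1, mean1, sd1 = 100, 50.0, 10.0   # Group 1 purchasers
n2, mean2, sd2 = 120, 55.0, 11.0   # Group 2 purchasers

# Welch's two-sample t statistic (unequal variances assumed)
se = math.sqrt(sd1**2 / n1 + sd2**2 / n2)
t = (mean2 - mean1) / se

# Welch-Satterthwaite approximation to the degrees of freedom
v1, v2 = sd1**2 / n1, sd2**2 / n2
df = (v1 + v2)**2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))

print(f"t = {t:.2f} on about {df:.0f} df")  # t ≈ 3.53: significant at any conventional level
```

Comparing the response rates themselves (100/50,000 vs. 120/50,000) would call for a separate two-proportion test, and, as the linked discussion emphasizes, neither calculation settles what the decision-maker should actually do with the result.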

It’s reassuring to find people who are much wiser in the ways of quantitative methods struggling over “basic” questions—primarily, the important question of the value of hypothesis testing in this situation.

"On an entirely different tech front, I showed a friend of mine how she could easily realize her vision of creating Stata-HP, a version of Stata where various commands would be renamed to correspond to spells from Harry Potter.
The "quietly" command, for instance, would be reimplemented as
"silencio." She had embarked on assembling a list of commands and their
spellname equivalents, but I haven't heard back from her since. Maybe
its release can be timed to coincide with the final book in the series,
perhaps with a set of Quidditch statistics to demonstrate it on."

Six years ago Congress passed the little-known Data Quality Objectives Act (DQOA). According to the Newsletter I just received from the Maryland Environmental Law Program and an article in the Maryland Bar Bulletin, "[t]he DQOA is a two-sentence mandate snuck into the fiscal year 2001 federal
appropriations bill as a rider. It simply requires the Office of Management and
Budget (OMB) to adopt mandatory guidance requiring each federal agency to adopt
formal guidelines to ensure that information 'disseminated' by that agency meets
certain data-quality criteria for 'quality, objectivity, utility, and integrity'." But data quality, it seems, is in the eye of the beholder: industries, at least in the environmental context, are invoking the federal law to challenge state data and even potentially inaccurate spreadsheets.

The article linked to above asks, "Should environmental regulations be based upon a solid scientific foundation and
reliable data before they become final and have the 'force and effect' of law?" This, however, raises all sorts of questions: What data is good enough? Should agencies be allowed to regulate in the face of uncertainty, where the risk is unknown but the potential harms may be severe? What processes create good data in administrative agencies? OMB and EPA guidelines for ensuring quality data can be found here and here.

I'd be interested if anyone knows if the Data Quality Act has been litigated in other contexts besides environmental law.

04 May 2006

Although some studies explore possible systematic variation between elected and appointed judges, comparatively less work assesses possible differences flowing from the structure of state judicial elections. One obstacle to such work is that most judicial election states elect their judges in a uniform manner. A slight wrinkle exists in Kansas, however, where partisan elections select state judges in 14 districts and noncompetitive retention elections select judges in 17 districts. A recent paper by Gordon (NYU) and Huber (Yale) exploits this unique feature of judicial selection in Kansas and explores how it influences sentencing. They find that Kansas judges in partisan election districts sentence more severely than their counterparts in retention districts. Additionally, they provide evidence suggesting that this difference arises from the incentives created by electoral competition, rather than from the selection of inherently more punitive judges in competitive districts.

This article adds an empirical perspective to the debate over the use of foreign authority by federal courts. Through a citation count analysis, it surveys sixty years of federal court practice in citing opinions from foreign high courts. The data reveal that federal courts rarely cite foreign decisions, that they do so no more now than they did in the past, and that on those few occasions when they do cite foreign decisions, it is usually not to help them interpret domestic law. Instead, the citation of foreign decisions is best understood as a relatively rare phenomenon of judicial dialogue in cases where international issues are squarely presented by the facts. The article examines in some detail those few cases where federal courts have cited foreign decisions, and briefly considers some implications of the limited use of foreign decisions by federal courts.

03 May 2006

In an effort to diversify our content and add a second Bill to the blog, Bill Henderson, an Associate Professor of Law at Indiana, will be joining us as the sixth editor and permanent blogger of the ELS Blog. Bill's
primary research interests include the regulation of the financial
markets, class action litigation, and the economics of the legal
profession. Bill, as a guest blogger, posted on lawyer salaries, learning "ELS on the Cheap", and law firm economic geography.

In other news, we also have a fine slate of guest bloggers lined up for the summer, including, to name a few, Lee Epstein, Howard Gillman, and Christopher Zorn, former Program Director for the Law and Social Science Program at the National Science Foundation. In addition, our Blog Forums will feature Dan Kahan, who will write about Yale's Cultural Cognition Project, and Elizabeth Mertz and Stewart Macaulay, who will post about The New Legal Realism Project.

The mission of the new Journal of Spurious Correlations is, apparently, to deal with the “file drawer effect” in social science research, i.e., the tendency for journals to publish studies that find significant relationships rather than those that do not. Publication decisions may be biased in favor of the 5% of studies that show relationships purely as a result of chance and against the 95% of studies that show no relationships. These latter studies end up filed away and forgotten, thereby distorting our knowledge of the true relationships.
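The 5% figure above is just the conventional Type I error rate. A quick simulation (purely illustrative, not from the journal's materials) shows that roughly one in twenty "null" studies will clear the significance bar by chance alone:

```python
import random

random.seed(1)

# Simulate many "null" studies: each reports a z statistic drawn from a world
# with no true effect. At the 5% level (|z| > 1.96), about 1 in 20 will
# nonetheless look "significant" and be the kind of result journals favor.
trials = 20000
significant = sum(abs(random.gauss(0.0, 1.0)) > 1.96 for _ in range(trials))
rate = significant / trials

print(f"false-positive rate: {rate:.3f}")  # close to 0.05
```

If only the "significant" draws get published, the file drawer fills with the other nineteen twentieths, which is precisely the distortion the journal aims to correct.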

A group of social scientists in Europe and the US has
established a new journal of negative and unpublishable results in the social
sciences. The mission of The Journal of Spurious Correlations (JSpurC) is to
provide a legitimate venue for exploring pure and applied methodological
questions in the social sciences in the company of colleagues without fear of
professional embarrassment or reprisal. While a number of the present
organizers are political scientists, such an initiative may be relevant to
other social science disciplines as well, and to a range of methodological
approaches beyond the ‘quantitative.’

The reason for the special emphasis on embarrassment and reprisal is somewhat surprising to me, if the goal is simply to provide a venue for studies that fail to confirm the relationships found in other studies. Perhaps more people than I thought take a failure to replicate their results very personally. The first issue is scheduled for this year.

An advanced graduate student (in political economy) I'm assisting requests suggestions for especially accessible texts (or, for that matter, articles) laying out the basics of survival analysis in general and the Cox proportional hazards model in particular (including explanations of diagnostic issues), with an emphasis on data structure requirements. My initial suggestions, evidently, were not particularly helpful. Consequently, I (or, more accurately, we) welcome suggestions from others.
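Not a substitute for the texts being requested, but for readers unfamiliar with the data structure survival analysis expects, the core requirement is one row per subject recording a duration and an event/censoring indicator. A hand-rolled Kaplan-Meier estimator over hypothetical data illustrates the layout (the Cox model expects the same duration-plus-event structure, with covariates added):

```python
# Hypothetical survival data: (duration, event) pairs, where event=1 means the
# failure was observed at that time and event=0 means the subject was censored.
data = [(2, 1), (3, 1), (3, 0), (5, 1), (8, 0), (9, 1)]

def kaplan_meier(records):
    """Product-limit estimate of S(t) at each observed event time."""
    records = sorted(records)
    n_at_risk = len(records)
    surv = 1.0
    estimates = []
    i = 0
    while i < len(records):
        t = records[i][0]
        deaths = sum(1 for d, e in records if d == t and e == 1)
        ties = sum(1 for d, e in records if d == t)
        if deaths:
            surv *= 1.0 - deaths / n_at_risk
            estimates.append((t, surv))
        n_at_risk -= ties  # everyone at time t (events and censored) leaves the risk set
        while i < len(records) and records[i][0] == t:
            i += 1  # skip past all rows sharing this time
    return estimates

print(kaplan_meier(data))  # survival steps down at t = 2, 3, 5, 9
```

In practice one would reach for R's `survival` package (a `Surv(time, event)` object passed to `coxph`) rather than hand-rolling anything; this sketch only shows the bookkeeping the data must support.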

01 May 2006

A recently published paper by Baldez, Epstein & Martin seeks to assess the possible impact(s) of a federal Equal Rights Amendment. Its abstract:

"For over 3 decades, those engaged in the battle over the Equal Rights Amendment (ERA), along with many scholarly commentators, have argued that ratification of the amendment will lead U.S. courts (1) to elevate the standard of law they now use to adjudicate claims of sex discrimination, which, in turn, could lead them (2) to find in favor of parties claiming a denial of their rights. We investigate both possibilities via an examination of constitutional sex discrimination litigation in the 50 states, over a third of which have adopted ERAs. Employing methods especially developed for this investigation, we find no direct effect of the ERA on case outcomes. But we do identify an indirect effect: the presence of an ERA significantly increases the likelihood of a court applying a higher standard of law, which in turn significantly increases the likelihood of a decision favoring the equality claim."

What caught my eye in particular (in addition to an interesting topic and indirect finding) was the clever way the authors approach their research question by endeavoring to leverage variation across states in what amounts to something akin to a natural experimental design. While I would normally hot-link to the paper itself, potential copyright "questions" counsel a mere citation (35:1 J. Legal Studies 243 (2006)) for those interested.