www.elsblog.org - Bringing Data and Methods to Our Legal Madness

14 August 2007

Forum Post #3: Methodological Matters

My Ph.D. thesis director once advised me that "if it's worth doing, it's worth doing badly." His point was not to make the perfect the enemy of the good, particularly when conducting truly original research. So it's important to preface any critique of their work by acknowledging that, whatever flaws their study may have, Nance and Steinberg have done a great service to the legal academy by shedding some empirical light on the question of law review publication.

As is the case with any empirical paper, methodological criticisms of their work are among the easiest to offer. One could question their decision to use conventional factor analysis with ordinally measured response variables (particularly when better-suited techniques exist), or their extensive use of tables when figures (such as the one above) typically do a much better job of conveying complex statistical results.

My biggest concern, however (and one prefaced by Michael's comment to Bill's first Forum post), is the effect of social desirability bias (hereinafter SDB) on the study's findings. SDB refers to survey respondents' tendency to answer in ways they believe are socially (or, here, professionally) desirable or expected of them; it is a well-known and commonly observed phenomenon in survey research (for a recent paper with a list of current references, see Streb et al.). I'd contend that the presence and effect of such bias can explain both their intuitive findings and some of the more unexpected ones.

Articles editors (AEs) undoubtedly are interested in growing the prestige of their journal, and in minimizing their editorial workload. They are also, however, socialized into the law review culture; they understand that law reviews, as forums for scholarly work, should publish the "best" (most original, creative, important, well-reasoned, persuasive) scholarship they can. As AEs, their professional role is to select such work for publication, and to do so in a way that doesn't systematically disfavor authors or work on the basis of other (putatively irrelevant) criteria. SDB suggests that AEs' survey responses will likely reflect their desire to be seen as conforming to that role.

Consider Nance and Steinberg's rather odd finding that, while "Author Prestige" is among the most influential of their constructs, "Notability of the Author" ranks dead last in the rankings of publication criteria. The phrasing of the authors' 56 "influence" questions is such that none is dispositive; each can influence the publication process without making or breaking a given paper. In contrast, asking AEs to rank order the seven publication criteria forces a zero-sum choice: for one criterion to be ranked higher, another must be ranked lower. That, combined with the relatively small number of items to be ranked and the presence of SDB effects, makes it difficult for an AE to place "Notability" high in the rankings.

A similar dynamic might explain the relative weakness of "negative" author traits: while AEs can be forgiven for privileging work by high-prestige authors, it is considered much less acceptable to disadvantage low-prestige ones. Finally, in our post-Grutter world, it is likely that most AEs almost reflexively responded "no influence" to questions regarding the effects of author race and gender.

But while SDB is a potentially serious problem, it is by no means insurmountable. A standard way of assessing the presence of SDB is to compare survey responses with actual behavior; as Michael suggested in his earlier comment, the obvious means of doing this would be to analyze data on actual submissions and acceptances. Barring that, SDB can be reduced in surveys through anonymity; survey respondents who are assured that their answers will be anonymous are typically less affected by SDB than those who can be identified.

Comments


It would be interesting to see a comparison of the author-prestige effect on publications in law reviews versus peer-reviewed journals.

I suspect the "Sunstein effect" would make law reviews look like they have more author prestige effect, but after introducing a dummy variable for Cass, I'm not sure how much difference you would find.

Jason -- I wouldn't disagree with your characterization of how the process typically works, though I'd also bet that certain law reviews might accept almost *anything* from certain (high-prestige) authors. More generally, though, there are plenty of ways short of experiments to minimize or eliminate SDB (for example, a quick Google search turned up a recent review of such techniques).

Dylan: A standard sort of approach to an observational study would be to examine whether (coded = 1) or not (coded = 0) a particular submission was accepted at a particular journal, and to model that as a function of variables describing the article itself (subject matter, length, etc.) as well as the author (affiliation, rank, etc.) and perhaps the journal as well. The statistical approach would be a regression-type model (like a logit or probit). Of course, one would have to know which manuscripts were submitted to which journals, and ideally have some idea of how the expedite process played out in each case. But, there again, even an imperfect study along such lines could tell us a lot about the factors which are influential in the process.

You and Michael both mention the possibility of comparing survey responses with actual behavior. But given the subjective and multi-factored analysis that goes into publication decisions (and the small sample size at top journals), I wonder if you have thoughts about how that might be measured. It seems exceedingly difficult to me to separate out the many not-really-independent variables that make up a publication decision. Even if one found, for example, that authors from less prestigious faculties are underrepresented, wouldn't you have to show by some mechanism that their articles were as "good" as the ones that were published in order to demonstrate that their affiliation was a significant factor in the rejection? I don't have an empirical research background (Jason was the brains behind that part of the operation), so there may be techniques for dealing with this problem, but it seems like a significant barrier to the sort of study you suggest.

As I mentioned in a previous post, self-reporting bias is something social scientists learn to live with, unless they are willing to put in the time and extraordinary effort, and have the resources, to conduct an experimental design that controls for unwanted influences. And even then, experimental designs have their own set of problems.

Chris did bring up an interesting finding in our study that I would like to briefly address. It is true that author prestige was among the most influential factors when isolated from the others, but when the editors were asked to rank it against other factors such as persuasiveness of the arguments, originality, etc., it ranked last. We were puzzled by this as well. Although social desirability bias may explain this discrepancy, other plausible explanations exist. The one that seems most reasonable to me is as follows. An author's prestige, while certainly influential, will not be enough to warrant an offer of publication if the arguments are not persuasive or original, or if the article does not have the potential to influence other legal scholarship. Whether students are capable of making that call is open to debate, but if a student can recognize those shortcomings, the author's notability will not save the article. I would be surprised to hear an editor say differently.