Edward Cheng (Vanderbilt) has posted his paper, "When 10 Trials are Better than 1000: An Evidentiary Perspective on Trial Sampling," on SSRN. It's also available through Penn's website. Here's the intro from the Penn page:

In many mass tort cases, individual trials are simply impractical. Take, for example, Wal-Mart Stores, Inc. v. Dukes, a class action employment discrimination suit that the Supreme Court reviewed last Term. With over 1.5 million women potentially involved in the litigation, the notion of holding individual trials is fanciful. Other recent examples of the phenomenon include the In re World Trade Center Disaster Site Litigation and the fraud litigation against light cigarette manufacturers, in which Judge Weinstein colorfully noted that any “individualized process . . . would have to continue beyond all lives in being.”

Faced with an unserviceable number of plaintiffs, courts have proposed sampling trials: rather than litigating every case, courts would litigate a small subset and award the remaining plaintiffs statistically determined amounts based on the results. But while sampling is standard statistical practice and often accepted as evidence in other legal contexts, appellate courts have balked at the notion of court-mandated, binding trial sampling, citing due process concerns.

Despite this appellate reluctance, the controversy continues unabated. Trial courts have soldiered on by using nonbinding sampled trials (dubbed “bellwether trials”) to induce settlement, and a few brave appellate courts, including the Ninth Circuit in Dukes, have even hinted at an increased receptivity to sampling. Given that trial courts have few practical alternatives, one wonders if it is just a matter of time before their appellate brethren recognize the necessity of sampling.

And here's the SSRN abstract:

In many mass tort cases, separately trying all individual claims is impractical, and thus a number of trial courts and commentators have explored the use of statistical sampling as a way of efficiently processing claims. Most discussions on the topic, however, implicitly assume that sampling is a “second best” solution: individual trials are preferred for accuracy, and sampling is justified only under extraordinary circumstances. This Essay explores whether this assumption is really true. While intuitively one might think that individual trials would be more accurate at estimating liability than extrapolating from a subset of cases, the Essay offers three ways in which the “second best” assumption can be wrong. Under the right conditions, sampling can actually produce more accurate outcomes than individualized adjudication. Specifically, sampling’s advantages in averaging (reducing variability), shrinkage (borrowing strength across cases), and information gathering (through nonrandom sampling) can result in some instances in which ten trials are better than a thousand.
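The shrinkage point in the abstract has a simple statistical core: when individual verdicts are noisy relative to the true spread of damages across cases, pulling each verdict toward the group average reduces total error. Here is a minimal simulation of that intuition. It is not from Cheng's paper; the numbers (a common mean of 100, between-case spread `TAU`, trial noise `SIGMA`) and the oracle shrinkage weight are hypothetical choices for illustration.

```python
import random

random.seed(0)

N = 1000      # number of cases (hypothetical)
TAU = 10.0    # spread of true damages across cases (hypothetical)
SIGMA = 30.0  # trial noise: a verdict is a noisy measure of true damages

# True damages for each case, centered on a common mean of 100.
truth = [random.gauss(100.0, TAU) for _ in range(N)]
# Each individual trial returns the true value plus substantial noise.
verdicts = [t + random.gauss(0.0, SIGMA) for t in truth]

# Estimator 1: take every individual verdict at face value.
mse_individual = sum((v - t) ** 2 for v, t in zip(verdicts, truth)) / N

# Estimator 2: shrink each verdict toward the grand mean of all verdicts,
# weighting by the ratio of between-case spread to total variance
# (an oracle weight that assumes TAU and SIGMA are known; illustration only).
grand_mean = sum(verdicts) / N
w = TAU ** 2 / (TAU ** 2 + SIGMA ** 2)
shrunk = [grand_mean + w * (v - grand_mean) for v in verdicts]
mse_shrunk = sum((s - t) ** 2 for s, t in zip(shrunk, truth)) / N

print(mse_shrunk < mse_individual)  # shrinkage wins when trial noise dominates
```

When trial noise (`SIGMA`) dominates the true between-case spread (`TAU`), the shrunk estimates have markedly lower mean squared error, which is the "borrowing strength across cases" advantage the abstract describes; if the cases were truly heterogeneous (`TAU` large relative to `SIGMA`), the advantage would shrink accordingly.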