ALEX STEIN, PROFESSOR OF LAW

The individual plaintiff plays a critical—yet underappreciated—role in our legal system. Only lawsuits brought by individual plaintiffs allow the law to achieve the twin goals of efficiency and fairness. The ability of individual plaintiffs to seek justice against those who wronged them deters wrongdoing ex ante, and in those cases in which a wrong has nevertheless been committed, it guarantees the payment of compensation ex post. No other form of litigation, including class actions and criminal prosecutions, and no compensation fund can accomplish the same result. Yet, as we show in this Essay, in many key sectors of our economy, suits by individual plaintiffs have become a rare phenomenon, if not a virtual impossibility. The architecture of liability makes causes of action more complex and difficult to prove while equipping defendants with multiple defenses. Coupled with the vast cost advantage that large corporate defendants enjoy over individual plaintiffs on account of superior legal expertise and economies of scale and scope, this architecture makes it nearly impossible for individual plaintiffs to prevail in court, or even get there. This problem pervades many industries, but, for the reasons we detail, it is particularly acute in the insurance, healthcare, medical, and consumer finance sectors.

To address this growing problem, we propose a full-fledged legal reform that encompasses substantive, procedural, evidentiary, and remedial measures. Substantively, we explain how civil liability should be redesigned to give a fairer chance to individual plaintiffs. Specifically, we call for the simplification of causes of action and the elimination of cumbersome elements that doom many individual lawsuits. Procedurally, we propose a fast-track litigation course that would enable courts to resolve disputes expeditiously. As we show, the introduction of this new procedure would deprive corporate defendants of one of their most critical advantages: the ability to extend litigation over long periods of time and make it more costly than it should be. On the evidentiary front, we recommend that lawmakers shift the burden of proof of certain disputed elements from plaintiffs to defendants and explain how this could be done. Finally, as far as remedies are concerned, we make a case for a new preliminary remedy—a partial payment order—define the conditions under which it should be awarded, and argue for a more extensive use of statutory damages and damage multipliers. Implementing our proposed reform will go a long way toward restoring the pride of place individual plaintiffs traditionally held in our legal system.

The “watershed” doctrine gives prisoners a constitutional avenue to reopen their cases based on a new due process protection that would have made a difference had it been announced before their appeals were exhausted. The Supreme Court has imposed nearly impossible conditions, however, for any new rule of criminal procedure to apply retroactively to a final conviction or sentence. No such rule can be backdated unless it enhances not only the accuracy of criminal verdicts, but also “our very understanding of the bedrock” tenets of fairness in criminal trials. The Court refers to rules that satisfy both these requirements as “watersheds.” In the quarter-century since it established this doctrine, the Court has denied accuracy-and-fairness credentials to every one of the dozens of new rules it has characterized as procedural and whose watershed status it has considered. Scholarly consensus accordingly casts the watershed doctrine as exceptional, esoteric, and insignificant.

This Article challenges that consensus. We use the dynamic concentration model of game theory to show how the watershed doctrine counteracts the structural undersupply of constitutional due process rules. The Court maintains too small a caseload to scrutinize more than a fraction of due process violations or specify every such procedural demand. That institution is accordingly ill-equipped to rein in the punitive tendencies of elected state judges who owe their jobs to electorates that tend to value crime prevention more than defendants’ rights. The watershed doctrine potentially mitigates this enforcement problem by creating an extreme, if low-probability, threat of repealing scores of final convictions. By issuing a single new watershed rule, the Court can mandate sweeping retrials or releases of prisoners. This existential threat provides an overlooked reason why state courts might insulate their states’ criminal procedures against Supreme Court incursions. To achieve the desired insulation, state courts can create constitutional safe harbors by trying to align their procedures with watersheds they project the Court might announce in the future. Indirect support for this theory comes from our comprehensive study of the hundreds of watershed decisions that state courts have issued since 1989. We narrowed this list down to the 228 controlling decisions about whether to backdate distinct due process rules across different jurisdictions. Our analysis found that 27, or more than one in nine, of these decisions inflate the retroactivity rights of criminal defendants.

Why set up evidentiary rules rather than allow factfinders to make decisions by considering all relevant evidence? This fundamental question has been the subject of unresolved controversy among scholars and policymakers since Bentham raised it at the beginning of the nineteenth century. This Article offers a surprisingly straightforward answer: An economically minded legal system must suppress all evidence that carries a negative productivity-expense balance and is therefore inefficient. Failure to suppress inefficient evidence will result in serious diseconomies of scale.

To operationalize this idea, I introduce a “signal-to-noise” method borrowed from statistics, science, and engineering. This method focuses on the range of probabilities to which evidence falling into a specified category gives rise. Specifically, it compares the average probability associated with the given evidence (the “signal”) with the margins on both sides (the “noise”). This comparison allows policymakers to determine the signal-to-noise ratio (SNR) for different categories of evidence. When the evidence’s signal overpowers the noise, the legal system should admit the evidence. Conversely, when the noise emanating from the evidence drowns out the signal, the evidence is inefficient and should therefore be excluded. I call this set of rules “the SNR principle.” Descriptively, I demonstrate that this principle best explains the rules of admissibility and corroboration by which our system selects evidence for trials. Prescriptively, I argue that the SNR principle should guide the rules of evidence selection and determine the scope of criminal defendants’ constitutional right to compulsory process.
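The comparison at the heart of the SNR principle can be sketched numerically. The following is an illustrative toy model, not the Article's method: the probability figures, the measure of spread, and the admission threshold of 1.0 are all hypothetical assumptions.

```python
# Illustrative sketch of the signal-to-noise comparison (all numbers and the
# admission threshold are hypothetical assumptions, not the Article's data).

def snr(probabilities):
    """Signal-to-noise ratio: the average probability a category of evidence
    supports (signal) divided by the spread around that average (noise)."""
    signal = sum(probabilities) / len(probabilities)
    noise = max(probabilities) - min(probabilities)
    return float("inf") if noise == 0 else signal / noise

# Hypothetical probability ranges produced by two categories of evidence:
fingerprint_matches = [0.90, 0.93, 0.95, 0.92]  # tight range: signal dominates
eyewitness_ids = [0.20, 0.50, 0.85, 0.95]       # wide range: noise dominates

for name, probs in [("fingerprint", fingerprint_matches),
                    ("eyewitness", eyewitness_ids)]:
    ratio = snr(probs)
    ruling = "admit" if ratio > 1.0 else "exclude"  # threshold assumed
    print(f"{name}: SNR = {ratio:.2f} -> {ruling}")
```

On these made-up figures, the tightly clustered category clears the threshold while the widely scattered one does not, mirroring the admit/exclude logic described above.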

It is a virtual axiom in the world of law that legal norms come in two prototypes: rules and standards. The accepted lore suggests that rules should be formulated to regulate recurrent and frequent behaviors whose contours can be defined with sufficient precision. Standards, by contrast, should be employed to address complex, variegated behaviors that require the weighing of multiple variables. Rules rely on an ex ante perspective and are therefore considered the domain of the legislator; standards embody a preference for ex post, ad hoc analysis and are therefore considered the domain of courts. The rules/standards dichotomy has become a staple in economic analysis of the law, as well as in legal theory in general.

This Essay seeks to contribute to the jurisprudential literature by unveiling a new form of legal command: the catalog. A catalog, as we define it, is a legal command comprising a specific enumeration of behaviors, prohibitions, or items that share a salient common denominator, together with a residual category—often denoted by the words “and the like” or “such as”—that empowers courts to add other unenumerated instances. We demonstrate that the catalog form is often socially preferable to both rules and standards and can better enhance the foundational values of the legal system. In particular, catalogs are capable of providing certainty to actors at a lower cost than rules, while avoiding the costs of inconsistency and abuse of discretion that plague standards. Moreover, the use of catalogs leads to a better institutional balance of powers between the legislator and the courts by preserving the integrity and autonomy of both institutions. We show that these results hold in a variety of legal contexts, including bankruptcy, torts, criminal law, intellectual property, constitutional law, and tax law—all discussed throughout the Essay.

Our civil liability system affords defendants numerous defenses against virtually every violation of the law. Against every claim raised by the plaintiff, the defendant can assert two or more defenses, each of which gives him an independent opportunity to win the case. As a result, even when a court erroneously strikes out a meritorious defense, it might still keep the defendant out of harm’s way by granting him another. Rightful plaintiffs, on the other hand, must convince the court to deny each and every defense asserted by the defendant. Any rate of adjudicative error—random and completely unbiased—consequently increases the prospect of losing the case for meritorious plaintiffs while decreasing it for defendants. This pro-defendant bias forces plaintiffs to settle suits below their expected value. Worse yet, defendants can unilaterally reduce a suit’s expected value and extort a cheap settlement from the plaintiff through the strategic addition of defenses. We uncover and analyze this problem and its distortionary effect on settlements and primary behavior. Subsequently, we develop three alternative solutions to the problem and evaluate their pros and cons.
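The compounding logic behind this pro-defendant bias can be made concrete with a hedged arithmetic sketch. Assuming, purely for illustration, a uniform and unbiased per-defense error rate, a meritorious plaintiff must survive every defense, so her win probability decays geometrically as defenses are added:

```python
# Toy arithmetic for the defense-stacking effect (the 10% error rate is a
# hypothetical assumption). With an unbiased per-defense error rate e, a
# meritorious plaintiff prevails only if the court correctly rejects all
# k defenses: probability (1 - e) ** k.

def plaintiff_win_probability(error_rate, num_defenses):
    return (1 - error_rate) ** num_defenses

e = 0.10  # assumed chance the court errs on any single defense
for k in (1, 2, 3, 5):
    p = plaintiff_win_probability(e, k)
    print(f"{k} defense(s) -> meritorious plaintiff wins with p = {p:.2f}")
```

Even this fully random error lowers the suit's expected value with each added defense, which is the settlement-extortion lever the paragraph describes.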

Intellectual Property Defenses

In this Article, we offer an integrated theory of intellectual property defenses. The Article demonstrates that all intellectual property defenses can be fitted into three conceptual categories: general, individualized, and class defenses. A general defense is the inverse of a right in rem. It goes to the validity of the intellectual property right asserted by the plaintiff, and when raised successfully it relieves not only the actual defendant, but also the public at large, of the duty to comply with the plaintiff’s asserted intellectual property right. An individualized defense, as we define it, is the inverse of an in personam right: it helps a defendant who raises it to fend off the infringement claim against her, but leaves the plaintiff’s right intact, and hence allows the plaintiff to assert it against other defendants. Class defenses form an in-between category. They can be analogized to inverse quasi-property rights in that they create an immunity zone for a certain group of users. However, class defenses do not act to invalidate the right of the plaintiff, and thus the benefit to the public from a successful showing of a class defense is more limited than that arising from the vindication of general defenses.

Based on this taxonomy, the Article shows that society has a special interest in the successful raising of class and especially general defenses, as those defenses help clear the path of invalid intellectual property rights and thereby facilitate future innovation, creativity and competition. Yet, because defendants do not capture the full social benefit associated with class and general defenses, they will not invest the socially optimal level of resources in raising and litigating such defenses. As a result, some defendants will be defeated in court, while others will agree to a settlement that will keep general and class defenses unrealized to society’s detriment. This problem is exacerbated by the fact that intellectual property owners will tend to target defendants who have no wherewithal to litigate.

To remedy this problem, the Article proposes a procedural solution designed to achieve a better alignment between the private interest of intellectual property defendants and that of society at large: voluntary joinder of defendants. This solution would allow defendants who raise class or general defenses to implead other potential defendants. Impleaded parties would be at liberty to decline the invitation to join. However, declining the invitation would subject the impleaded parties to one of two mechanisms: we dub the first the “preclusion mechanism” and the second the “restitution mechanism.” Under the “preclusion mechanism,” impleaded defendants would be able to opt out instead of joining in, but if the class or general defense asserted by the actual defendant failed, they would be precluded from asserting a general or class defense in their future litigation against the same plaintiff. The impleaded defendants would thus be incentivized to form an alliance that would face the infringement suit together. Under the “restitution mechanism”—which the Article ultimately endorses—impleaded parties who chose to opt out would not partake in the litigation, but should the actual defendant(s) successfully raise a class or general defense, they would have to pay their fair share of the cost of raising the defense, since they, too, benefit from the effort.

Behavioral Probability

Daniel Kahneman’s recent book, Thinking, Fast and Slow, is a must-read for any scholar and policymaker interested in behavioral economics. Thus far, behavioral economists have done predominantly experimental work that uncovered discrete manifestations of people’s bounded rationality: representativeness, availability, anchoring, overoptimism, base-rate neglect, hindsight bias, loss aversion, and other misevaluations of probability and utility. This work has developed no causal explanations for these misevaluations. Kahneman’s book takes the discipline to a different level by developing an integrated theory of bounded rationality’s causes and characteristics. This theory holds that humans use two distinct modes of reasoning, intuitive (System 1) and deliberative (System 2), while systematically allowing their fast intuitions to supersede deliberation. These intuitions grow from familiar stereotypes, resemblances, and emotions that oftentimes do not align with reality and lead the person astray. The dominance of these untutored, yet powerful, intuitions explains people’s various misconceptions in the domains of probability and utility.

The book is beautifully written: its insights are rich, profound, and at the same time lucid and exceptionally well presented. The book develops clear explanations for complex economic and psychological phenomena and accompanies them with brilliantly selected examples from real life and controlled experiments.

These virtues, however, do not make the book uncontroversial. My Review critically examines Kahneman’s account of people’s probabilistic irrationality, which occupies the majority of the book. This examination lays out my profound disagreement with Kahneman’s grim assessment of ordinary people’s reasoning, widely known as the bounded-rationality theory.

Specifically, I argue that Kahneman and his collaborators have used an incomplete and unstable set of criteria for appraising people’s determinations of probability. This set of criteria is incomplete for two reasons. First, it does not account for the divide between rule-free ‘beliefs’ and rule-based ‘acceptances,’ drawn by philosophers of rationality. Acceptance is a mentally active process that includes application of decisional rules to available information. Belief, by contrast, is a person’s feeling, sensation or hunch: an intellectually passive state of mind generated by unanalyzed experiences. People participating in Kahneman’s and other behavioral economists’ experiments have formed no rule-driven acceptances. All they did was report their beliefs, because this is what the experimenters asked them to do. Individuals’ rationality, however, properly manifests itself only in acceptances that apply rules of reasoning. A person who fails to align her intuitive beliefs with the rules of probability may well be – and oftentimes is – perfectly rational in her rule-driven decisions.

Kahneman’s criteria for probabilistic rationality also fail to recognize a distinct – and perfectly rational – framework of reasoning, known as causative or Baconian. Under this framework, an event’s probability corresponds to the quantum and variety of the evidence that confirms the event’s occurrence while eliminating rival scenarios. This qualitative evidential criterion separates causative probability from the mathematical calculus of chances. Individuals who apply this criterion in making case-specific decisions are far from being irrational.

Kahneman’s criteria for probabilistic decisions are unstable because they tolerate the presence of unspecified causality and malleable reference classes in experimental settings and provide no directives on how to deal with these problems. These problems are present in all of Kahneman’s experiments that mixed statistical data with case-specific events. As a result, these experiments do not reveal anything about the rationality of participants’ probabilistic decisions. A person who ascribes probability to a causally unspecified event featuring a malleable reference class can never go wrong: her guesswork is as good as Kahneman’s or anyone else’s.

Based on this analysis, I revisit the flagship experiments substantiating Kahneman’s theory of bounded probabilistic rationality (‘The Blue Cab,’ ‘Feminist Bank Teller,’ and ‘The Librarian’). Contrary to Kahneman’s assessment, I demonstrate that these experiments reveal no irrationalities in the participants’ probabilistic decisions. Moreover, I show that the rationality criteria favored by Kahneman and other behavioral economists can set up the ‘irrationality maze’: a situation in which all decisions available to a person are irrational.
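For readers unfamiliar with the ‘Blue Cab’ experiment, the following sketch shows the mathematical-probability benchmark that Kahneman applies to it and that this Review contests; the figures are the commonly cited ones (85% of cabs Green, 15% Blue, a witness who is 80% reliable).

```python
# Bayesian benchmark for the 'Blue Cab' problem, using the commonly cited
# figures. This is the calculus Kahneman treats as normative; the Review's
# point is that it is not the only rational framework available.

def posterior_blue(prior_blue, witness_accuracy):
    prior_green = 1 - prior_blue
    hit = prior_blue * witness_accuracy                  # Blue cab, witness says Blue
    false_alarm = prior_green * (1 - witness_accuracy)   # Green cab, witness says Blue
    return hit / (hit + false_alarm)

p = posterior_blue(prior_blue=0.15, witness_accuracy=0.80)
print(f"P(cab was Blue | witness says Blue) = {p:.2f}")  # about 0.41
```

Participants who answer 0.80 are said to neglect the base rate; the Review's argument is that this verdict presupposes the mathematical system of probability over the causative one.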

I argue, in conclusion, that the ‘bounded probabilistic rationality’ theory is unproven. Policymakers therefore ought to put on hold the behavioral economists’ recommendations urging the government to step in and fix people’s probabilistic decisions. This paternalism, whether ‘soft’ or invasive, is wasteful and potentially pernicious. Implementing it will suppress individuals’ creativity and heterogeneity without introducing substantial improvements in their decisions.

The Relational Contingency of Rights

In this Article, we demonstrate, contrary to conventional wisdom, that all rights are relationally contingent. Our main thesis is that rights afford their holders meaningful protection only against challengers who face higher litigation costs than the rightholder. Contrariwise, challengers who can litigate more cheaply than a rightholder can force the rightholder to forfeit the right and thereby render the right ineffective. Consequently, in the real world, rights avail only against certain challengers but not others. This result is robust and pervasive. Furthermore, it obtains irrespective of how rights and other legal entitlements are defined by the legislator or construed by courts. We also show that in many legal areas, such as property law, intellectual property law, insurance law, and criminal law, rightholders systematically suffer from a cost disadvantage vis-à-vis certain categories of challengers who can render their rights virtually unrealizable. After uncovering these problems and analyzing their implications for prevalent understandings of rights in the jurisprudential and economic literatures, we identify mechanisms that our legal system ought to adopt to fend off the threat to the integrity of its rights-based design and bolster the protection afforded by rights. These mechanisms include heightened court fees, fee shifting, punitive damages, and various procedural safeguards. We submit that, under the appropriate design, they can go a long way toward countering the strategic abuse of rights.

The Distortionary Effect of Evidence on Primary Behavior

In this Essay, we analyze how evidentiary concerns dominate actors’ behavior. Our findings offer an important refinement to the conventional wisdom in the law and economics literature, which assumes that legal rules can always be fashioned to achieve socially optimal outcomes. We show that evidentiary motivations will often lead actors to engage in socially suboptimal behavior when doing so is likely to increase their likelihood of prevailing in court. Because adjudicators must base decisions on observable and verifiable information—or, in short, evidence—rational actors will always strive to generate evidence that can later be presented in court and increase their chances of winning the case, regardless of the costs they impose on third parties and society at large. Accordingly, doctors and medical institutions will often refer patients to undergo unnecessary and even harmful examinations just to create a record that they went beyond the call of duty in treating them. Owners of land and intellectual property may let harmful activities continue much longer than necessary just to gather stronger evidence concerning the harm they suffer. And even the police will often choose to allow offenders to carry out crimes in order to improve the chance of a conviction. The effect we identify is pervasive. It can be found in virtually all areas of the law. Furthermore, there is no easy way to eliminate or correct it. It should be noted, however, that the evidentiary phenomenon we discuss also has a positive side effect: it reduces adjudication costs for judges and juries and improves the accuracy of court processes. In some cases, this improvement will exceed the social cost stemming from actors’ suboptimal behavior. In other contexts, however, the social cost will far outweigh the benefit.

The Flawed Probabilistic Foundation of Law & Economics

This Article challenges the mathematical probability system that underlies law and economics and behavioral analysis and argues that many of the core insights of both approaches are irremediably flawed. The Article demonstrates that mathematical probability is only suitable for pure gambles and hence does not provide a useful epistemic tool for analyzing individual decisionmaking. As a result, mathematical probability cannot serve as a useful tool for lawmakers. Mathematical probability, the Article proposes, ought to be replaced with causative probability—a system of reasoning compatible with the causal structure of people’s physical, social and legal environments. Originating from the writings of John Stuart Mill and Francis Bacon, causative probability differs from its mathematical cousin both conceptually and substantively. By contrast to the mathematical system that bases probability estimates on abstract averages, the causative system bases probability estimates upon case-specific evidential variety. Under the causative system, the probability that a person’s action will bring about a particular consequence—gain or loss—is determined by the number and scope of the consequence’s evidential confirmations in the individual case, and not by general averages that are usually irrelevant to the individual determination at hand. Causative probability allows a person to develop a better epistemic grasp of her individual case relative to what she could achieve under the mathematical system. This epistemological advantage turns causative probability into a superior tool for understanding how legal mechanisms work, for improving those mechanisms, and for defining the rationality of individuals’ decisions.

The Flawed Probabilistic Foundation of Law & Economics, 105 Northwestern University Law Review 199-260 (2011)

Strategic Enforcement

Doctrine and scholarship recognize two basic models of enforcing the law: the comprehensive model, under which law enforcers try to apprehend and punish every violator within the bounds of feasibility; and the randomized model, under which law enforcers economize their efforts by apprehending a small number of violators and heightening their penalties so as to make violations unattractive. This Article supplements this list of options by developing a strategic model of law enforcement. Under this model, law enforcers concentrate their effort on the worst, or most rampant, violators at a given point in time while leaving all others unpunished. This enforcement strategy will force violators into a cascaded retreat: to avoid detection as one of the worst violators, every individual wrongdoer will bring the level of his unlawful activity down to the point of inconspicuousness—a process that will repeat itself several times, to society’s benefit. This Article identifies the circumstances that call for the strategic model’s adoption and illustrates the model’s potential as an enforcement tool in diverse areas of the law, including employment discrimination, election districting, and copyright protection.
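The cascaded retreat can be illustrated with a toy simulation; the retreat dynamics and all numbers below are hypothetical assumptions for exposition, not the Article's model.

```python
# Toy simulation of the cascaded retreat (all numbers hypothetical). Each
# round the enforcer punishes only the most conspicuous violator; the others
# cut their activity to a fraction of the old maximum to avoid being next,
# and the cycle repeats.

def cascade(levels, retreat=0.8, rounds=4):
    history = [sorted(levels, reverse=True)]
    for _ in range(rounds):
        levels = sorted(levels, reverse=True)
        worst = levels[0]
        # the worst violator is sanctioned into compliance (level 0);
        # the rest cap their activity below the old maximum
        levels = [min(v, retreat * worst) for v in levels[1:]] + [0]
        history.append(sorted(levels, reverse=True))
    return history

for round_levels in cascade([10, 8, 6, 4]):
    print([round(v, 1) for v in round_levels])  # aggregate activity shrinks each round
```

On these assumptions, total unlawful activity falls round after round even though the enforcer punishes only one violator at a time, which is the leverage the strategic model claims.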

ORIGINALITY

In this Essay we introduce a model of copyright law that calibrates authors’ rights and liabilities to the level of originality in their works. We advocate this model as a substitute for the extant regime that unjustly and inefficiently grants equal protection to all works satisfying the “modicum of creativity” standard. Under our model, highly original works will receive enhanced protection and their authors will also be sheltered from suits by owners of preexisting works. Conversely, authors of less original works will receive diminished protection and incur greater exposure to copyright liability. We operationalize this proposal by designing separate rules for highly original works, for works exhibiting average originality, and for works that are minimally original or unoriginal. We illustrate our rules’ application by showing how they could have altered court decisions in classic copyright cases in a socially beneficial way.

This Essay addresses an anomaly in trespass law. Trespass law is generally understood as the paradigmatic example of property-rule protection: an owner can obtain an injunction against the trespasser and have him removed from her land. The property-rule protection enjoyed by the owner protects her right to exclude others and to set the price for the use of her property. However, the property-rule protection only exists ex ante: it avails only against imminent or ongoing trespasses. Ex post, after a trespass ends, the owner can only recover compensation measured by the market value of the unauthorized use, i.e., the going rent. This liability-rule compensation dilutes the ex ante property-rule protection of ownership. Effectively, it grants trespassers a call option on others’ property, creating a mismatch between rights and remedies.

To remedy this mismatch, we introduce the concept of “propertized compensation”—a damage measure that sets compensation equal to the owner’s pre-trespass asking price. We contend that propertized compensation should become the primary remedial option in trespass cases. The use of this measure will reinstate the owner’s position as a price maker, entitling her to recover the amount that she would have agreed to accept ex ante in a voluntary exchange. We further argue that owners who cannot produce evidence regarding their pre-trespass asking price (as well as owners who prefer not to seek propertized compensation) should be entitled to seek disgorgement of the trespasser’s profits. Finally, we claim, contra the extant regime, that market-value compensation should only be used in the exceptional cases of trespass by necessity, media trespass, and good faith encroachments. In all other cases, it should only be awarded if the owners specifically ask for it.

TORTS AND INNOVATION

This Essay exposes and analyzes a hitherto overlooked cost of the current design of tort law: its adverse effect on innovation. Tort liability for negligence, defective products, and medical malpractice is determined by reference to custom. We demonstrate that courts’ reliance on custom and conventional technologies as the benchmark of liability chills innovation and distorts its path. Specifically, the recourse to custom taxes innovators and subsidizes replicators of conventional technologies. We explore the causes and consequences of this phenomenon and propose two possible ways to modify tort law in order to make it more welcoming of innovation.

THE ANTI-POOLING JUSTIFICATION OF THE FIFTH AMENDMENT PRIVILEGE AGAINST SELF-INCRIMINATION (THE RIGHT TO SILENCE)

The right to silence has a solid consequentialist justification. The conventional perception of this right -- that it impedes the search for truth and thus helps only criminals -- is mistaken. The right to silence helps triers of fact to distinguish between innocent and guilty defendants. A guilty suspect's self-interested response to questioning can impose externalities, in the form of wrongful conviction, on innocent suspects and defendants who tell the truth but cannot corroborate their stories. Absent the right to silence, guilty suspects and defendants would make false exculpatory statements if they believed that their lies were unlikely to be exposed. Aware of these incentives, triers of fact would rationally discount the probative value of uncorroborated exculpatory statements at the expense of innocent defendants who could not corroborate their true exculpatory statements. Because the right to silence is available, innocent defendants tell the truth while guilty defendants rationally exercise the right when they fear that lying is exceedingly risky. Thus, guilty defendants do not pool with innocents by lying; and as a result, triers of fact do not wrongfully convict innocent defendants.

The Right to Silence Helps the Innocent: A Game-Theoretic Analysis of the Fifth Amendment Privilege, 114 Harvard Law Review 430 (2000) (with Daniel Seidmann)

THE OVERENFORCEMENT PARADIGM

Overenforcement of the law occurs when the total sanction suffered by the violator of a legal rule exceeds the amount optimal for deterrence. Overenforcement sometimes generates overdeterrence that cannot be remedied through the adjustment of substantive liability standards or penalties in light of operational and expressive constraints. When that happens, the legal system can counteract the effects of overenforcement by adjusting evidentiary or procedural rules to make liability less likely (ex ante). This framework -- the overenforcement paradigm -- illuminates previously unnoticed features of various evidentiary and procedural arrangements. It also provides a useful analytical and prescriptive tool for creating balanced incentives in cases in which overenforcement is present.

MEDIATING RULES IN CRIMINAL LAW

This Article challenges the conventional divide between substantive criminal law theory, on the one hand, and evidence law, on the other, by exposing an important and unrecognized function of evidence rules in criminal law. Throughout the criminal law, special rules of evidence work to mediate conflicts between criminal law’s deterrence and retributivist goals. They do this by skewing errors in the actual application of the substantive criminal law to favor whichever theory has been disfavored by the substantive rule itself. The mediating potential of evidentiary rules is particularly strong in criminal law because the substantive law’s dominant animating theories – deterrence and retributivism – respond asymmetrically to the workings of those rules. We analyze the features of “mediating rules,” explore their effects across a range of substantive areas, and offer a tentative normative assessment of their role in the criminal law system.

Ambiguity aversion is a person's rational attitude toward the indeterminacy of probability. When a person is averse to such ambiguity, he increases the probability of the unfavorable outcome to reflect that fear. This observation is particularly true of a criminal defendant who faces a jury trial. Neither the defendant nor the prosecution knows whether the jury will convict the defendant. Their best estimate relies on a highly generalized probability that attaches to a broad category of similar cases. The prosecution, as a repeat player, is predominantly interested in the conviction rate that it achieves over a long series of cases. It therefore can depend on this general probability as an adequate predictor of this rate. The defendant only cares about his individual case and cannot depend on this general probability. From the defendant's perspective, his individual probability of conviction is ambiguous. The defendant consequently increases this probability to reflect his fear of that ambiguity. Because most defendants are ambiguity-averse, while the prosecution is not, the criminal process systematically involves and is thoroughly affected by asymmetric ambiguity-aversion.

Asymmetric ambiguity aversion foils criminal justice. The prosecution can exploit it by forcing defendants into plea bargains that are both inefficient and unfair. Because plea bargaining is the predominant method of case disposition across the United States, this exploitation opportunity is particularly pernicious. The legal system ought to eliminate it.

Two Fifth Amendment doctrines -- the rule against double jeopardy and the grand jury review of indictments -- have the effect of mitigating this problem. The rule against double jeopardy sets a pro-defendant system of asymmetric rights to appeal. This system reduces the probability of conviction for all defendants, regardless of the merits. This probability reduction offsets -- but does not eliminate -- the upward adjustment that an ambiguity-averse defendant introduces into his probability of conviction. Grand jury review disambiguates the defendant's probability of conviction when he is informed of the grand jurors' voting score. This disambiguation is only partial, though, because grand jurors are authorized to indict upon a mere showing of "probable cause."

The prevalent constitutional doctrine should therefore be modified by giving a defendant the right to choose between a bench trial and a trial by jury. Judges are repeat institutional players who credibly commit themselves to reasons for decisions that are evenhanded, known, and institutionally approved. This commitment is induced not only by the judges' fear of reversal and other career-related repercussions, but also by the defendant's constitutional entitlement to a trial by jury. For judges, a jury trial is a time-consuming and effort-intensive process with virtually no career-enhancing returns. Judges therefore strongly prefer a bench trial over a trial by jury. To actualize this preference, judges must systematically deliver evenhanded decisions that follow the institutionally approved reasons. This makes judges' decisions predictable. The defendant's probability of being convicted by a judge in a bench trial thus becomes unambiguous, which neutralizes the prosecution's ambiguity-exploiting pressure in plea bargaining.

Empirical data confirm these findings. Specifically, they identify three major trends. First, bench trials are prevalent in jurisdictions featuring high trial rates, generated by a non-meticulous selection of cases for prosecution. Second, the rate of acquittals in bench trials is much higher than in trials by jury. The defendants' ambiguity-aversion is the most plausible explanation of these trends. Defendants with real prospects for acquittal have much to lose and are therefore unwilling to depend upon unpredictable juries. Finally, there is a demand for jury-consulting services and no discernible market for judge-consulting services. Litigants are willing to pay for information predicting the outcomes of jury trials and are generally unwilling to pay for information predicting judges' decisions in bench trials. This leads to the conclusion that ambiguity aversion is particularly problematic in trials by jury.

DISTINGUISHING BETWEEN THE (EX ANTE) RISK OF INJURY AND THE (EX POST) PROBABILITY OF CAUSATION

The following example is paradigmatic:

The claimant required urgent surgery, which -- if performed properly and on time -- would have given her a 75% chance of recovery. The doctors negligently delayed the surgery. The delayed surgery was performed impeccably, but it promised the claimant only a 25% chance of recovery. Ultimately, the claimant did not recover.

The claimant cannot prove causation and attempts to recover compensation under the lost-chances doctrine. Under this doctrine, courts uniformly award the claimant 50% of her total damage.

This approach is wrong: the claimant should recover 2/3 of her damage, not just 50%.

See, for example, Mays v. United States, 608 F. Supp. 1476 (D. Colo. 1985) (upon finding that medical malpractice reduced the patient's chances of recovery from 40% to 15%, the court reasoned that the damage was 25% of the patient's total loss); Herskovits v. Group Health Cooperative of Puget Sound, 664 P.2d 474 (Wash. 1983) (holding a 14% reduction, from 39% to 25%, in the decedent's chance for survival as sufficient evidence to allow the case to go to the jury); Alberts v. Schultz, 975 P.2d 1279, 1287 (N.M. 1999) (holding that if medical malpractice reduced the patient's chance of survival from 50% to 20%, that patient's compensation would be 30% of the value of his or her life); Jorgenson v. Vener, 616 N.W.2d 366, 372 (S.D. 2000) (if instead of completely eliminating the chance of recovery, the physician's negligence merely reduced the chance of recovery from 40% to 20%, then the value of the lost chance would be 20% of the value of a complete recovery).

For reasons provided below, the claimant should have recovered 29% of the damage in Mays; 19% of the damage in Herskovits; 37.5% of the damage in the Alberts example; and 25% of the damage in the Jorgenson example.
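These figures follow from the ex post probability of causation, (p-q)/(1-q), derived in the analysis below. A minimal sketch (the function name and case labels are ours, chosen for illustration) verifies each of the four numbers, with p and q denoting the patient's chances of recovery before and after the malpractice:

```python
def probability_of_causation(p: float, q: float) -> float:
    """Ex post probability that the wrongdoing caused the injury,
    given the victim's chance of escaping it before (p) and after (q)
    the wrongdoing."""
    return (p - q) / (1 - q)

# (pre-wrongdoing chance p, post-wrongdoing chance q) in each cited case
cases = {
    "Mays":       (0.40, 0.15),   # ~29% of the damage
    "Herskovits": (0.39, 0.25),   # ~19% of the damage
    "Alberts":    (0.50, 0.20),   # 37.5% of the damage
    "Jorgenson":  (0.40, 0.20),   # 25% of the damage
}
for name, (p, q) in cases.items():
    print(f"{name}: {probability_of_causation(p, q):.3f}")
```

The paradigmatic example works the same way: with p = 0.75 and q = 0.25, the recovery is 0.50/0.75, that is, 2/3 of the damage rather than the 50% that courts uniformly award.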

Take a person who sustains an injury after being wrongfully exposed to a risk of sustaining that injury. Before the wrongdoing, this victim's probability of sustaining the injury equaled 1-p, the complement of her probability of remaining uninjured (p). After the wrongdoing, the victim's probability of sustaining the injury became 1-q, the complement of her probability of escaping the injury (q). Because the victim actually sustained the injury, her case falls into the 1-q category. This statistical category comprises two jointly exhaustive and mutually exclusive scenarios that reflect the victim's initial position. In the first scenario, the victim sustains the injury irrespective of the wrongdoing: she was doomed to sustain the injury, so the wrongdoing had no impact on her well-being. As already indicated, the probability of this scenario equals 1-p. In the second scenario, it is the wrongdoing that causes the victim's injury: she would have remained uninjured had she not been exposed to the wrongdoing. The probability of this scenario equals (1-q)-(1-p), that is, p-q. This ex ante probability represents the reduction in the victim's chances of remaining uninjured, as effected by the wrongdoing.

Now consider the ex post probability of the scenario in which the wrongdoing was the actual cause of the victim's injury. This probability is a fraction. Its numerator is the probability of the scenario in which the victim would not have sustained her injury but for the wrongdoing; its denominator covers the entire cluster of cases featuring an injured victim, a wrongdoing, and the exhaustive variety of causal factors that could inflict the same injury on the victim. The numerator equals p-q; the denominator equals 1-q. The ex post probability that the wrongdoing actually inflicted the victim's injury therefore equals (p-q)/(1-q).
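This verbal derivation compresses into a single conditional-probability computation:

```latex
\Pr(\text{causation} \mid \text{injury})
  = \frac{\Pr(\text{injury caused by the wrongdoing})}{\Pr(\text{injury})}
  = \frac{(1-q)-(1-p)}{1-q}
  = \frac{p-q}{1-q}
```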

As already mentioned, the victim's (ex ante) risk of sustaining injury as a result of the wrongdoing equals p-q. Consequently, in cases in which the victim actually sustains injury, the (ex post) probability of causation -- that is, the probability of the allegation that factually attributes the injury to the defendant's wrongdoing -- would generally be higher than the (ex ante) risk of injury. This would be so because, on numerous occasions, a wrongdoing increases the victim's probability of becoming injured without transforming this prospect into empirical reality. In any such case, since the wrongdoing still leaves the victim with chances of escaping the injury, 0 < q < 1; hence, (p-q)/(1-q) > p-q. The two probabilities would be equal only when q = 0, that is, when the wrongdoing totally eliminates the victim's chances of escaping the injury. In q = 0 cases, the risk of injury and the probability of causation would concur and would equal p.
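The gap between the two probabilities can be made explicit by subtraction: the difference is strictly positive whenever p > q and 0 < q < 1, and vanishes at q = 0:

```latex
\frac{p-q}{1-q} - (p-q)
  = (p-q)\left(\frac{1}{1-q} - 1\right)
  = (p-q)\cdot\frac{q}{1-q} \;>\; 0
```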

The courts' approach is also detrimental to society because it produces suboptimal deterrence.

Using the same notation, let p and q denote, respectively, the victim's chances of remaining uninjured before and after the wrongdoing. Let D denote the average amount of damage that the wrongdoing inflicts in the long run of cases, and let T denote the total number of cases in which the risky activity takes place. The ideal compensation that the legal system should exact from the wrongdoer would thus equal (p-q)DT.

In reality, however, only injured victims can successfully sue the wrongdoer.

Therefore, the number of cases in which the wrongdoer would have to pay compensation would equal (1-q)T. If each injured victim recovered only her risk-based share of the damage, (p-q)D, the wrongdoer's total payout would be (p-q)(1-q)DT -- below the optimal (p-q)DT. Using the probability of causation as an award multiplier would eliminate this shortfall. As already established, the probability of causation equals (p-q)/(1-q). The total amount of the wrongdoer's compensation duty would consequently be [(p-q)/(1-q)]D(1-q)T, that is, (p-q)DT.
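The accounting above can be checked numerically. A minimal sketch (the figures for p, q, D, and T are hypothetical, chosen only for illustration):

```python
# Hypothetical illustration of the deterrence accounting
p, q = 0.75, 0.25       # chances of remaining uninjured before / after the wrongdoing
D, T = 100_000, 1_000   # average damage per case; total number of cases

optimal = (p - q) * D * T              # harm actually inflicted: (p-q)DT
injured_cases = (1 - q) * T            # only injured victims can sue: (1-q)T
multiplier = (p - q) / (1 - q)         # probability of causation

with_multiplier = multiplier * D * injured_cases   # [(p-q)/(1-q)]D(1-q)T
risk_based = (p - q) * D * injured_cases           # (p-q)D per victim falls short

print(optimal, with_multiplier, risk_based)
```

The award multiplier makes the wrongdoer's total payout equal the optimal (p-q)DT, whereas the risk-based award of (p-q)D per injured victim leaves a shortfall of factor (1-q).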

This compensation duty equals the losses inflicted by the wrongdoer. It would therefore optimally deter prospective wrongdoers (and would also promote corrective justice).