Pages

Tuesday, October 4, 2016

A Comprehensive Study of Trade Secret Damages

Posted by
Michael Risch

Elizabeth Rowe (Florida) has shared a draft of "Unpacking Trade Secret Damages" on SSRN. The paper is an ambitious one, examining all of the federal trade secret verdicts she could find (which she believes is a reasonably complete set based on her methods) that issued between 2000 and 2014. The abstract is:

This study is the first to conduct an in-depth empirical analysis of damages in trade secret cases in the U.S. From an original data set of cases in federal courts from 2000 to 2014, I assess the damages awarded on trade secret claims. In addition, a wide range of other variables are incorporated into the analysis, including those related to background court and jurisdiction information, the kinds of trade secrets at issue, background details about the parties, the related causes of action included with claims of trade secret misappropriation, and details about the damages awarded.

Analysis of this data and the relationship between and among the variables yields insightful observations and answers fundamental questions about the patterns and the nature of damages in trade secret misappropriation cases. For instance, I find average trade secret damage awards comparable to those in patent cases and much larger than trademark cases, very positive overall outcomes for plaintiffs, and higher damages on business information than other types of trade secrets. The results make significant contributions in providing deeper context and understanding for trade secret litigation and IP litigation generally, especially now that we enter a new era of trade secret litigation in federal courts under the Defend Trade Secrets Act of 2016.

I think this study has a lot to offer. Although it doesn't include state court cases, it provides a detailed look at trade secret cases in the first part of this century. Of course, the verdicts, which were about 6% of all trade secret cases filed, are subject to the same selection effects as any other verdict analysis - there is a whole array of cases (more than 2,000 of them in the federal system alone) that never made it this far, and we don't know what the tried cases tell us about the shorter-lived ones.

The study offers a lot of details: amounts of awards, states with the highest awards, states with the most litigation, judge v. jury, attorneys' fees, punitive damages, the effect of NDAs on damages, etc. It goes a step further and offers information about the types of information at issue, and even the types of information that garner different sizes of awards. It's really useful information, and I recommend this study to anyone interested in the state of trade secret litigation today.

There are, however, a couple of ways I think the information could have been presented differently. First, the study reports some percentile information, which is great, but most of it focuses on averages. This is a concern because the data is highly skewed; one nearly billion-dollar verdict drives much of the relevant totals. As a result, it is difficult to get a real sense of how the verdicts look, and no standard deviation is reported.

Of course, the median award according to the paper is zero, so reporting medians alone is a problem. I particularly liked the percentile table and discussion, and I wonder whether a 25/50/75 presentation would work. Speaking of zero-dollar awards, though, I thought the paper could be improved by clarifying what is calculated in the average. Is it the average of all verdicts? All verdicts where the plaintiff wins? All non-zero verdicts? Related to this, I thought that clearly disaggregating defendant verdicts would be helpful. The paper reports how many plaintiffs won, but this is not reflected in either the median or average award data (as far as I can tell; only total cases are reported). At one point the paper discusses the average verdict for defendants (more than $800,000), which is confusing since defendants shouldn't win any damages. Are these part of the averages? Are they calculated as a negative value? If these are fee awards, they should be reported separately, I would think.
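The skew problem is easy to see with a toy calculation. The figures below are entirely made up (none come from the study); they just show how a single outsized verdict drags the mean far above the median and quartiles, which is why a 25/50/75 percentile presentation can be more informative than an average:

```python
import statistics

# Hypothetical trade secret awards, including several zero-dollar
# verdicts and one enormous outlier (all numbers invented).
awards = [0, 0, 0, 25_000, 85_000, 200_000, 750_000, 2_000_000, 940_000_000]

mean = statistics.mean(awards)                   # dominated by the outlier
median = statistics.median(awards)               # robust to the outlier
q1, q2, q3 = statistics.quantiles(awards, n=4)   # 25/50/75 percentiles

print(f"mean:     ${mean:,.0f}")
print(f"median:   ${median:,.0f}")
print(f"25/50/75: ${q1:,.0f} / ${q2:,.0f} / ${q3:,.0f}")
```

Here the mean lands above $100 million even though three quarters of the awards are under $1.5 million, which mirrors the concern about the nearly billion-dollar verdict driving the study's totals.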

Though I would like more data resolution, I should note that this really is just a presentation issue. The hard part is done, and the data is clearly available to slice and dice in a variety of ways, and I look forward to further reporting of it.