
Tuesday, September 26, 2017

As I mention in my forthcoming book chapter on empirical methods in trade secret research, there's a real dearth of good empirical scholarship about the role of trade secrets in the economy. One scholar who has written several articles in this area is Ivan Png of the National University of Singapore. Professor Png exploits variation in the strength of trade secret protection to identify causal effects on outcomes such as innovation and worker mobility.

His latest article, Secrecy and Patents: Theory and Evidence from the Uniform Trade Secrets Act (SSRN draft here; final paywalled version here), examines how rates of patenting change when levels of protection for trade secrets change. Here is the abstract, which shares some of the findings:

How should firms use patents and secrecy as appropriability mechanisms? Consider technologies that differ in the likelihood of being invented around or reverse engineered. Here, I develop the profit-maximizing strategy: (i) on the internal margin, the marginal patent balances appropriability relative to cost of patents vis-a-vis secrecy, and (ii) on the external margin, commercialize products that yield non-negative profit. To test the theory, I exploit staggered enactment of the Uniform Trade Secrets Act (UTSA), using other uniform laws as instruments. The Act was associated with 38.6% fewer patents after one year, and smaller effects in later years. The Act was associated with larger effect on companies that earned higher margins, spent more on R&D, and faced weaker enforcement of covenants not to compete. The empirical findings are consistent with businesses actively choosing between patent and secrecy as appropriability mechanisms, and appropriability affecting the number of products commercialized.

Frankly, I think the abstract undersells the findings a bit, as it seems targeted to the journal Strategy Science. The paper itself takes a much broader view of the model: "If trade secrets law is stronger in the sense of reducing the likelihood of reverse engineering, then businesses should adjust by (i) patenting fewer technologies and keeping more of them secret, and (ii) commercializing more products."

Like Png's other work in this area, the core of the analysis begins with an index of trade secret strength in each state, based on passage of the UTSA and on variations in each state's implementation of it (e.g., with respect to inevitable disclosure). For this paper, Png also obtained data on the location of company R&D facilities and the patents coming out of those facilities. And he used other uniform laws passed at around the same time as an instrument, to address the concern that UTSA enactment is endogenous to patenting.
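For readers less familiar with this style of estimation, the basic shape of a staggered-enactment analysis can be sketched with simulated data. This is a toy two-way fixed effects regression on entirely made-up numbers; it is not Png's data, code, or instrument step, just an illustration of how a treatment effect is recovered when states adopt a law in different years:

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_years = 20, 15
# Staggered adoption: each (hypothetical) state enacts the law in a random year
adopt_year = rng.integers(3, 12, size=n_states)

rows = []
for s in range(n_states):
    state_effect = rng.normal()
    for t in range(n_years):
        treated = 1.0 if t >= adopt_year[s] else 0.0
        # True model: adoption reduces log patenting by 0.4
        y = 2.0 + state_effect + 0.05 * t - 0.4 * treated + rng.normal(scale=0.1)
        rows.append((s, t, treated, y))

rows = np.array(rows)
s_idx = rows[:, 0].astype(int)
t_idx = rows[:, 1].astype(int)
# Two-way fixed effects: y ~ treated + state dummies + year dummies
X = np.column_stack([
    rows[:, 2],                     # treatment indicator
    np.eye(n_states)[s_idx],        # state fixed effects
    np.eye(n_years)[t_idx][:, 1:],  # year fixed effects (one dropped for collinearity)
])
beta, *_ = np.linalg.lstsq(X, rows[:, 3], rcond=None)
print(f"Estimated treatment effect on log patents: {beta[0]:.3f}")
```

The fixed effects absorb time-invariant state differences and nationwide trends, so the treatment coefficient is identified from the timing of each state's enactment, which is the logic behind exploiting the UTSA's staggered rollout.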

This is a really interesting and important paper, even if it validates what most folks probably assumed (dating back to the days of Kewanee v. Bicron): if you strengthen secrecy, there will be fewer patents. That said, there is a lot going on in this paper, and a lot of assumptions in the modeling. First and foremost, the levels of trade secret protection don't have many degrees of freedom. I much prefer the categories created by Lippoldt and Schultz. That said, even a binary variable might be sufficient. Second, the model and estimation are based on the assumption that the marginal patent is the one most likely to be designed around, and they use the number of technology classes to estimate patent scope (and to validate the assumption). I know many folks who would disagree with using patent classes as a measure of scope.

Even with these critiques, this paper is worth a read and some attention. I'd love to see more like it.

Monday, September 25, 2017

Studying the effect of granting vs. rejecting a given patent application can reveal little about the ex ante patent incentive (since ex ante decisions were already made), but it can say a lot about the ex post effect of patents on things like follow-on innovation. But directly comparing granted vs. rejected applications is problematic because one might expect there to be important differences between the underlying inventions and their applicants. In an ideal (for a social scientist) world, some patent applications would be randomly granted or denied in a randomized controlled trial, allowing for a rigorous comparison. There are obviously problems with doing this in the real world—but it turns out that the real world comes close enough.

The USPTO does not randomly grant application A and reject application B, but it does often assign (as good as randomly) application A to a lenient examiner who is very likely to grant, while assigning B to a strict examiner who is very likely to reject. Thus, patent examiner leniency can be used as an instrumental variable for which patent applications are granted. This approach was pioneered by Bhaven Sampat and Heidi Williams in How Do Patents Affect Follow-on Innovation? Evidence from the Human Genome, in which they used it to conclude that, on average, gene patents appear to have had no effect on follow-on innovation.
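The examiner-leniency design can be illustrated with a toy simulation. Nothing below comes from Sampat and Williams' data or code; the leniency values, effect sizes, and sample sizes are invented, purely to show why a naive comparison of granted versus rejected applications is biased and how the instrument corrects it:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000
# Unobserved application quality confounds grant decisions and outcomes
quality = rng.normal(size=n)
# Applications assigned as-good-as-randomly to examiners of varying leniency
n_examiners = 100
examiner = rng.integers(0, n_examiners, size=n)
leniency = rng.uniform(0.2, 0.8, size=n_examiners)
# Grant depends on both examiner leniency and quality (the selection problem)
granted = (rng.uniform(size=n) < leniency[examiner] + 0.1 * quality).astype(float)
# True causal effect of a grant on the follow-on outcome is 0.5
outcome = 0.5 * granted + 1.0 * quality + rng.normal(size=n)

# Naive OLS is biased upward: quality drives both the grant and the outcome
ols = np.cov(granted, outcome)[0, 1] / np.var(granted)

# Wald/IV estimate using the assigned examiner's leniency as the instrument
z = leniency[examiner]
iv = np.cov(z, outcome)[0, 1] / np.cov(z, granted)[0, 1]
print(f"OLS: {ols:.2f}, IV: {iv:.2f}")
```

Because examiner assignment (and hence leniency) is unrelated to application quality, the IV estimate recovers something close to the true effect while the naive comparison overstates it.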

Since their seminal work, I have seen a growing number of other scholars adopt this approach, including these recent papers:

Monday, September 18, 2017

In my IP seminar, I ask students to pick an article to present in class for a critical style and substance review. This year, one of my students picked an article about copyright and tattoos, a very live issue. The article was decent enough, raising many concerns about tattoos: Is human skin fixed? Is it a copy? How do you deposit it at the Library of Congress? (answer: photographs) What rights are there to modify it? To photograph it? Why is it ok for photographers to take pictures, but not ok for video game companies to emulate them? Can they be removed or modified under VARA (which protects against such things for visual art)?

It occurred to me that we ask many of these same questions with architecture, and that the architectural rules have solved the problem. You can take pictures of buildings. You can modify and destroy buildings. You register buildings by depositing plans and photographs. Standard features are not protectible (sorry, no teardrop, RIP, and Mom tattoo protection). But you can't copy building designs. If we view tattoos on the body as a design incorporated into a physical structure (the human body), it all makes sense, and solves many of our definitional and protection problems.

Tattoos have experienced a significant rise in popularity over the last several decades, and in particular an explosion in popularity in the 2000s and 2010s. Despite this rising popularity and acceptance, the actual mechanics of tattoo ownership and copyright remain very much an issue of first impression before the courts. A series of high-priced lawsuits involving famous athletes and celebrities have come close to the Supreme Court at times, but were ultimately settled before any precedent could be set. This article describes a history of tattoos and how they might be seen to fit in to existing copyright law, and then proposes a scheme by which tattoo copyrights would be bifurcated similar to architecture under the Architectural Works Copyright Protection Act.

It's a whole article, so Parker spends more time developing the theory and dealing with topics such as joint ownership than I do in my glib recap. For those interested in this topic, it's certainly a thought-provoking analogy worth considering.

Bleistein v. Donaldson Lithographing Co. is a well-known early twentieth-century copyright decision of the U.S. Supreme Court. In his opinion for the majority, Justice Holmes is taken to have articulated two central propositions about the working of copyright law. The first is the idea that copyright's originality requirement may be satisfied by the notion of "personality," or the "personal reaction of an individual upon nature," which is present in just about every work of authorship. The second is the principle of aesthetic neutrality, according to which "[i]t would be a dangerous undertaking for persons trained only to the law to constitute themselves final judges of the worth of pictorial illustrations, outside of the narrowest and most obvious limits." Both of these propositions are today understood as relating to copyright's relatively toothless originality requirement, which few works ever fail to satisfy.

In a paper recently published in the Columbia Law Review, Barton Beebe (NYU) unravels the intellectual history of Bleistein and concludes that for over a century, American copyright jurisprudence has relied on a misreading (and misunderstanding) of what Holmes was trying to do in his opinion. On the first proposition, he shows that Holmes was deeply influenced by American (rather than British or European) literary romanticism, which constructed the author in a "distinctively democratic—and more particularly, Emersonian—image of everyday, common genius." (p. 370). On the second, Beebe argues that Holmes' comments on neutrality had little to do with the originality requirement; they were instead a response to the dissenting opinion, which had sought to deny protection to the work at issue (an advertisement for a circus) because it did not "promote the progress," as mandated by the Constitution. The paper then examines how this misunderstanding (about both propositions) came to influence copyright jurisprudence, and Beebe proceeds to suggest ways in which an accurate understanding of Bleistein might be used to reform crucial aspects of modern copyright law. The paper is well worth a read for anyone interested in copyright.

Beebe's examination of Holmes' views on progress, personality, and literary romanticism did, however, raise a question for me about the unity (or coherence) of Holmes' views, especially given that he was a polymath. Holmes has long been regarded as a Legal Realist who thought about legal doctrine in largely functional and instrumental terms, and Bleistein's commonly (mis)understood insights about originality comport well with that pragmatic worldview. His treatment of originality as a narrow (and normatively empty) concept, for instance, sits well with his anti-conceptualism and critique of formalist thinking. But if Holmes really did not intend for originality to be a banal and contentless standard (as Beebe suggests), how might he have squared its innate indeterminacy with his Realist thinking? Does Beebe's reading of Bleistein suggest that Holmes was not a Legal Realist after all when it came to questions of copyright law and its relationship to aesthetic progress? This of course isn't Beebe's inquiry in the paper (nor should it be, given the other important questions it addresses), but the possibility of revising our view of Holmes intrigued me.

Two principles lie at the core of federal Indian law. First, tribes possess inherent sovereignty, although their authority can be restricted through treaty, federal statute, or when inconsistent with their dependent status. Second, Congress possesses plenary power over tribes, which means it can alter or even abolish tribal sovereignty at will.

Tribal sovereign immunity flows from tribes’ sovereign status. Although the Supreme Court at one point described tribal sovereign immunity as an “accident,” the doctrine’s creation in the late nineteenth century in fact closely paralleled contemporaneous rationales for the development of state, federal, and foreign sovereign immunity. But the Court’s tone is characteristic of its treatment of tribal sovereign immunity: even as the Court has upheld the principle, it has done so reluctantly, even hinting to Congress that it should cabin its scope. This language isn’t surprising. The Court hasn’t been a friendly place for tribes for nearly forty years, with repeated decisions imposing ever-increasing restrictions on tribes’ jurisdiction and authority. What is surprising is that tribal sovereign immunity has avoided this fate. The black-letter law has remained largely unchanged, narrowly surviving a 2014 Court decision that saw four Justices suggest that the doctrine should be curtailed or even abolished.

Monday, September 11, 2017

It's good to be returning from a longish hiatus. I've just taken over as the Associate Dean for Faculty Research; needless to say, it's kept me busier than I would like. But I'm back, and hope to resume regular blogging.

My first entry has been sitting on my desk (errrr, my email) for about six months. In 2011, Bessen, Meurer, and Ford published The Private and Social Costs of Patent Trolls, which was received with much fanfare. Its findings of nearly $500 billion in lost market value over a 20-year period, and of $80 billion in losses per year over four years in the late 2000s, garnered significant attention; the paper has been downloaded more than 5,000 times on SSRN.

Enter Emiliano Giudici and Justin Robert Blount, both of Stephen F. Austin Business School. They have attempted to replicate the findings of Bessen, Meurer, and Ford with newer data. The results are pretty stark: they find no significant evidence of loss at all. They also attribute the findings of the prior paper to a few outliers, among other possible explanations. These are really important findings. Their paper has fewer than 50 downloads. The abstract is here:

An ongoing debate in patent law involves the role that “non-practicing entities,” sometimes called “patent trolls” serve in the patent system. Some argue that they serve as valuable market intermediaries and other argue that they are a drain on innovation and an impediment to a well-functioning patent system. In this article, we add to the data available in this debate by conducting an event study that analyzes the market reaction to patent litigation filed by large, “mass-aggregator” NPE entities against large publicly traded companies. This study advances the literature by attempting to reproduce the results of previous event studies done in this area on newer market data and also by subjecting the event study results to more rigorous statistical analysis. In contrast to a previous event study, in our study we found that the market reacted little, if at all, to the patent litigation filed by large NPEs.

This paper is a useful read beyond the empirics. It does a good job explaining the background, the prior study, and critiques of the prior study. It is also circumspect in its critique, focusing more on the inferences to be drawn from the study than on the methods. This is a key point: I'm not a fan of event studies, for a variety of reasons. But that doesn't mean I think event studies are somehow methodologically unsound. It just means that our takeaways from them have to be tempered by their limitations. And I've always been troubled that the key takeaways from Bessen, Meurer & Ford were outsized (especially in the media) compared to what the method can support.
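For readers unfamiliar with the method, the basic mechanics of an event study can be sketched in a few lines: estimate a market model on pre-event returns, then measure abnormal returns around the event date. The numbers below are simulated (a hypothetical firm, a hypothetical -2% shock on the filing date); they come from neither paper:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical daily returns: an estimation window plus an event window
est_days, event_window = 120, 5  # event window is +/- 5 days around the filing
market = rng.normal(0.0005, 0.01, size=est_days + 2 * event_window + 1)
beta_true, alpha_true = 1.1, 0.0002
firm = alpha_true + beta_true * market + rng.normal(0, 0.005, size=market.size)
# Inject a hypothetical -2% abnormal return on the event day (the filing date)
event_day = est_days + event_window
firm[event_day] += -0.02

# Market model (alpha, beta) estimated on the pre-event window only
X = np.column_stack([np.ones(est_days), market[:est_days]])
(alpha, beta), *_ = np.linalg.lstsq(X, firm[:est_days], rcond=None)

# Abnormal returns and cumulative abnormal return over the event window
ar = firm[est_days:] - (alpha + beta * market[est_days:])
car = ar.sum()
print(f"CAR over [-{event_window}, +{event_window}] days: {car:.3f}")
```

The fragility the critiques point to lives in these few lines: the result depends on the model of "normal" returns, the window lengths, and how statistical significance is assessed across many noisy firm-events, which is why inferences from event studies deserve the tempering discussed above.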

But Giudici and Blount embrace the event study, weaknesses and all, and do not find the same results. This, I think, is an important finding and worthy of publicity. That said, there are some critiques, which I'll note after the break.

Natalie Ram (Baltimore Law) applies the tools of innovation policy to the problem of criminal justice technology in her latest article, Innovating Criminal Justice (forthcoming in the Northwestern University Law Review), which is worth a read by innovation and criminal law scholars alike. Her dive into privately developed criminal justice technologies—"[f]rom secret stingray devices that can pinpoint a suspect’s location to source code secrecy surrounding alcohol breath test machines, advanced forensic DNA analysis tools, and recidivism risk statistic software"—provides both a useful reminder that optimal innovation policy is context specific and a worrying depiction of the problems that over-reliance on trade secrecy has wrought in this field.

She recounts how trade secrecy law has often been used to shield criminal justice technologies from outside scrutiny. For example, criminal defense lawyers have been unable to examine the source code for TrueAllele, a private software program for analyzing difficult DNA mixtures. Similarly, the manufacturer of Intoxilyzer, a breath test, has fought efforts for disclosure of its source code. But access to the algorithms and other technical details used for generating incriminating evidence is important for identifying errors and weaknesses, increasing confidence in their reliability (and in the criminal justice system more broadly), and promoting follow-on innovations. Ram also argues that in some cases, secrecy may raise constitutional concerns under the Fourth Amendment, the Due Process Clause, or the Confrontation Clause.

Drawing on the full innovation policy toolbox, Ram argues that contrary to the claims of developers of these technologies, trade secret protection is not essential for the production of useful innovation in this field: "The government has at its disposal a multitude of alternative policy mechanisms to spur innovation, none of which mandate secrecy and most of which will easily accommodate a robust disclosure requirement." Patent law, for example, has the advantage of increased disclosure compared with trade secrecy. Although some of the key technologies Ram discusses are algorithms that may not be patentable subject matter post-Alice, to the extent patent-like protection is desirable, regulatory exclusivities could be created for approved (and disclosed) technologies. R&D tax incentives for such technologies also could be conditioned on public disclosure.

But one of Ram's most interesting points is that the main advantage of patents and taxes over other innovation policy tools—eliciting information about the value of technologies based on their market demand—is significantly weakened for most criminal justice technologies, for which the government is the only significant purchaser. For example, there is little private demand for recidivism risk statistical packages. Thus, to the extent added incentives are needed, this may be a field in which the most effective tools are government-set innovation rewards—grants, other direct spending, and innovation inducement prizes—that are conditioned on public accessibility of the resulting algorithms and other technologies. In some cases, agencies looking for innovations may even be able to collaborate at no financial cost with academics such as law professors or other social scientists who are looking for opportunities to conduct rigorous field tests.

Criminal justice technologies are not the only field of innovation in which trade secrecy can pose significant social costs, though most prior discussions I have seen are focused on purely medical technologies. For instance, Nicholson Price and Arti Rai have argued that secrecy in biologic manufacturing is a major public policy problem, and a number of scholars (including Bob Cook-Deegan et al., Dan Burk, and Brenda Simon & Ted Sichelman) have discussed the problems with secrecy over clinical data such as genetic testing information. It may be worth thinking more broadly about the competing costs and benefits of trade secrecy and disclosure in certain areas—while keeping in mind that the inability to keep secrets does not mean the end of innovation in a given field.

Tuesday, September 5, 2017

There are two dominant utilitarian frameworks for justifying trademark law. Some view trademark protection as necessary to shield consumers from confusion about the source of market offerings, and to reduce consumers' "search costs" in finding things they want. Others view trademark protection as necessary to secure producers' incentives to invest in "quality." I personally am comfortable with both justifications for this field of law. But I have always been unclear as to how trademarks work as property. With certain caveats, I do not find it difficult to conceive of the patented and copyrighted aspects of inventions and creative writings as "property," on the theory that we generally create property rights in subject matter we want more of. But surely Congress did not pass the Lanham Act in 1946 and codify common law trademark protection simply because it wanted companies to invest in catchy names and fancy logos?

In his new paper, Trademark As A Property Right, Adam Mossoff seeks to clarify this confusion and convince people that trademarks are property rights based on Locke's labor theory. In short, Mossoff's view is that trademarks are not a property right on their own; rather, trademarks are a property right derived from the underlying property right of goodwill. Read more at the jump.

Saturday, September 2, 2017

I thought this short Twitter thread was such a helpful, concise summary of some of NYU economist Petra Moser's excellent work—and the incentive/access tradeoff of IP laws—that it was worth memorializing in a blog post. You can read more about Moser's work on her website.