innovation can suffer from two distinct problems: markets
can fail to provide strong incentives to invest in R&D, and they can fail
to provide strong incentives for learning new skills. Underinvestment in
R&D is not the only problem affecting innovation. It might not even be the
most important problem. ... There is simply no justification for focusing
innovation policy exclusively on remedying underinvestment in R&D,
especially since most firms report that patents, which are supposed to correct
this underinvestment, are relatively unimportant for obtaining profits on their
innovations.

The takeaway is that protecting inventions with patents and copyrights
can't be the sole function of an effective innovation policy. Governments need
to focus on a much broader range of policies to "encourage broad-based
learning of new technical skills, including vocational education, government
procurement, employment law, trade secrecy, and patents."

At IP
Scholars in Chicago this year, I'll be presenting my new paper Patent
Nationally, Innovate Locally. Like Bessen, I will talk
about a broad range of innovation incentives that focus on research and
technology commercialization, as well as public investments in STEM education,
worker training, and public infrastructure. I'll argue, however, that when
intellectual property rights are not the chosen mechanism, many of these
incentives should come from sub-national governments like states and cities
because they are the smallest jurisdictions that internalize the
immediate economic impacts of public investments in innovation. While
states cannot internalize the benefits of patent and copyright regimes that
result in widespread disclosure of easily transferable information, they can
internalize the benefits of innovation finance (direct expenditures of taxpayer
revenues on innovation) especially when those expenditures go towards improving
the education, skills, and knowledge-base of the local labor force.

Innovation finance (IF) is an important new frontier in IP law scholarship. Not only does innovation finance supplement federal IP rights by correcting
market failures in technology commercialization and alleviating
some of the inefficiencies created by patents and copyrights, it also
takes into account Bessen's point: "markets can fail to provide strong
incentives to invest in R&D, and they can fail to provide strong incentives
for learning new skills." Both market failures are important, and the latter may be even
more important than the former. But if we really want to focus on a broader
range of policies like government procurement and support for public education to "encourage broad-based learning of new technical
skills," as Bessen suggests, then we need to start looking at state and
local governments.

To understand this point, take the example of a government prize for developing
a better way to manufacture cars without using as many resources (e.g. 3D
printing). If the federal government gives the prize, this makes some sense:
assuming the prize hits its mark, national taxpayers will eventually benefit
when the innovation is perfected and widely adopted, and the information on how
to do it becomes public. But the impacts of the prize are going to be very different for different parts of the country. First off, the prize winner has to locate its research and
operations somewhere. Presumably, it's going to choose a state like
Michigan or Ohio with the resources, facilities, and human knowledge-base to do
this kind of research and experimentation. The immediate benefits for local firms and
residents are obvious: jobs, tax revenues, business for local companies. There is also a less perceptible but far more important benefit: easier access to new technical knowledge coming out of the
experiments and inside information on emerging market developments. Plentiful research suggests that a lot of knowledge is hard to transfer and that effective exchange requires proximity, especially when science-based research and unfamiliar technology are involved. The implication for local officials seeking to boost the regional economy is clear: the more innovation that happens in your jurisdiction and the more residents who gain skills in an important new field, the better off your state or city will be. (This is the basis for innovation cluster theory and the idea that regions gain competitive advantages from localized knowledge exchange, originally discussed by UC Berkeley's AnnaLee Saxenian.)

Given that the immediate economic impacts of the 3D printing prize, including the tax revenues and most of the spillovers, are geographically localized to certain regions, do we really want federal policymakers
designing these types of incentives, and do we really want taxpayers in states like Alaska and
Arizona footing the bill? Or do we want significant input – both
political and financial – from the places in which the innovation is occurring?
I think the answer is the latter. The benefits of decentralizing
fiscal policy are numerous; I see at least two major ones in this
case: fairer shouldering of tax burdens, and more efficient innovation policies
as a result of the better information and stronger incentives of local
officials. Not only are they aware of the capabilities and needs of the local economy but they can act swiftly in response to local problems, liberated from the wrangling of "earmark politics" at the national
level. The same principles apply to education and incentives for learning new skills – the second prong of Bessen's revitalized innovation policy. For example, would we expect national policymakers, who act in the national interest and are beholden to federal taxpayers, to supply the right amount of vocational training for future workers in the newly invented 3D printing automobile industry of my hypothetical? No: we would expect the main push for this kind of training to come from a state like Michigan with the right mix of interested workers and industry players.

In short, I suggest that innovation policy in the United States is not federal. It is bifurcated: the federal government protects exclusive rights in new inventions and original expression using patents and copyrights; states, cities, and other sub-national governments use innovation finance to capture the geographically localized economic benefits of innovation.

There are several responses to my argument. If innovation finance were all
local, then wouldn't there be a major under-supply of research, especially for innovations without a clear market, like research into rare debilitating diseases or (until Elon Musk) space exploration? Wouldn't states compete with each other and end up spending
way too much to attract firms into their jurisdictions? Aren't local
politicians all
corrupt anyway? I agree that all these risks exist. This is why I discuss a
variety of instances where the federal government has an important role to
play. Besides protecting copyrights and patents in new inventions, the federal
government does a lot of direct financing for innovation too. This money goes towards education, basic research, and mission R&D (mainly in national defense) – all
of which produce pervasive national spillovers as well as localized ones. On the flip side, the federal government also has a variety of means for controlling and coordinating
the actions of sub-national governments in order to reduce corruption, wasted
expenditures and "beggar thy neighbor" competition. Some of these
preemptive forces come from discretionary judicial doctrines like the Dormant
Commerce Clause (admittedly a weak source of limits on states); others are or perhaps should be statutory (the
Patent Act?).

These incentives include a 30% federal tax credit (set to expire at the end of 2016), as well as many state-level incentives, such as volumetrically reduced subsidies to benefit first movers, net metering policies requiring utilities to credit consumers who produce excess energy, and financial regulations that allow third-party financing to help consumers avoid upfront capital expenses. As they note, "the details matter," and "[n]ot all renewable portfolio standards are equal." This paper seems to nicely encapsulate many of those details.

Monday, July 27, 2015

I highly recommend two recently posted articles on declining innovation incentives for diagnostic tests, particularly due to changes in patentable subject matter doctrine. In Innovation Law and Policy: Preserving the Future of Personalized Medicine, Rachel Sachs (Petrie-Flom Fellow at Harvard Law) examines the intersection of IP with FDA regulation and health law, joining a growing body of scholarship that seeks to contextualize IP in a broader economic context. Here is the abstract:

Personalized medicine is the future of health care, and as such incentives for innovation in personalized technologies have rightly received attention from judges, policymakers, and legal scholars. Yet their attention too often focuses on only one area of law, to the exclusion of other areas that may have an equal or greater effect on real-world conditions. And because patent law, FDA regulation, and health law work together to affect incentives for innovation, they must be considered jointly. This Article will examine these systems together in the area of diagnostic tests, an aspect of personalized medicine which has seen recent developments in all three systems. Over the last five years, the FDA, Congress, Federal Circuit, and Supreme Court have dealt three separate blows to incentives for innovation in diagnostic tests: they have made it more expensive to develop diagnostics, made it more difficult to obtain and enforce patents on them, and reduced the amount innovators can expect to recoup in the market. Each of these changes may have had a marginal effect on its own, but when considered together, the system has likely gone too far in disincentivizing desperately needed innovation in diagnostic technologies. Fortunately, just as each legal system has contributed to the problem, each system can also be used to solve it. This Article suggests specific legal interventions that can be used to restore an appropriate balance in incentives to innovate in diagnostic technologies.

Diagnostic testing helps caregivers and patients understand a patient’s condition, predict future outcomes, select appropriate treatments, and determine whether treatment is working. Improvements in diagnostic testing are essential to bring about the long-heralded promise of personalized medicine. Yet it seems increasingly clear that most important advances in this type of medical technology lie outside the boundaries of patent-eligible subject matter.

The clarity of this conclusion has been obscured by ambiguity in the recent decisions of the Supreme Court concerning patent eligibility. Since its 2010 decision in Bilski v. Kappos, the Court has followed a discipline of limiting judicial exclusions from the statutory categories of patentable subject matter to a finite list repeatedly articulated in the Court’s own prior decisions for “laws of nature, physical phenomena, and abstract ideas,” while declining to embrace other judicial exclusions that were never expressed in Supreme Court opinions. The result has been a series of decisions that, while upending a quarter century of lower court decisions and administrative practice, purport to be a straightforward application of ordinary principles of stare decisis. As the implications of these decisions are worked out, the Court’s robust understanding of the exclusions for laws of nature and abstract ideas seems to leave little room for patent protection for diagnostics.

This essay reviews recent decisions on patent-eligibility from the Supreme Court and the Federal Circuit to demonstrate the obstacles to patenting diagnostic methods under emerging law. Although the courts have used different analytical approaches in recent cases, the bottom line is consistent: diagnostic applications are not patent eligible. I then consider what the absence of patents might mean for the future of innovation in diagnostic testing.

As I have written, I think changes to patentable subject matter doctrine are an important problem for medical innovation, and that policymakers should think seriously about whether additional non-patent innovation incentives are needed in this area.

Thursday, July 23, 2015

In a previous post, I discussed
a district court decision holding that the process for resolving patent
disputes under the Biologics Price Competition and Innovation Act
(BPCIA) is optional. That post contains extensive background on the
BPCIA and its purpose of providing an abbreviated pathway for “biosimilar”
drugs to get to market and compete with their branded analogs,
resulting in lower prices for consumers. The bottom line is that, under
the BPCIA, makers of biosimilar products can rely on the clinical trial
data developed for the branded (or “reference”) product in order to
accelerate FDA approval. Nevertheless, the BPCIA provides 12 years of
data exclusivity to the manufacturer of the reference product. And
beyond that period, even if the biosimilar garners FDA approval, the
brand owner can try to continue to keep it out of the market by
asserting claims of patent infringement. The BPCIA provides for a
procedure involving pre-suit information exchange between the brand and
biosimilar makers—the so-called “patent dance”—that is intended to
apprise the brand of the biosimilar’s manufacturing process and narrow
down the number of patents to be asserted. But the district court,
and now the Federal Circuit on appeal, have held that the biosimilar can lawfully refuse to participate in the patent dance.

When I was in law school, I was surprised (and fascinated) to learn how little scholars actually know about how patent laws affect innovation. My article Patent Experimentalism explains why this is such a hard empirical question, summarizes a lot of the empirical work that has been done, and analyzes the institutional design options (including policy randomization) to help make more empirical progress. Two of my favorites among the empirical pieces I discuss are by MIT economics professor Heidi Williams—one on IP-like contractual restrictions on human genes (summarized previously on this blog), and one on the skew in cancer drug R&D toward late-stage cancer patients (with Eric Budish and Ben Roin). In June, Williams posted a new paper that reflects on the challenges of measuring the relationship between patent strength and research investments, summarizes these two studies, and discusses directions for future research.

Although "a literal interpretation of the current set of available empirical evidence . . . would be that the patent system generates little social value," Williams explains that "drawing such a conclusion would be premature." The "dearth of empirical evidence" stems from two problems: measuring specific research investments, and "finding any variation (much less 'clean' or quasi-experimental variation) in patent protection." Her recent papers "identified and took advantage of new sources of variation in the effective intellectual property protection provided to different inventions." If you aren't familiar with those two papers, this piece contains a great summary.

Looking for similar kinds of variation seems like a promising avenue for future research, although Williams cautions against jumping quickly from her work to broad conclusions for patent policy. For example, while she found that contractual restrictions on human genes led to persistent decreases in follow-on research and commercial product development, her preliminary results from a follow-on project with Bhaven Sampat suggest that "on average gene patents have had no effect on follow-on innovation." As Williams notes, the U.S. Supreme Court has been concerned about the effects of patents on follow-on research in its recent forays into patentable subject matter, and perhaps further empirical work along these lines will help inform this muddled area of doctrine.

Thursday, July 16, 2015

Greg Mandel (Temple Law) has done some interesting empirical work on public perceptions of IP. In his latest work, Intellectual Property Law's Plagiarism Fallacy, he has collaborated with two psychologists, Anne Fast and Kristina Olson (University of Washington), on three new studies. They conclude that debates over whether IP should serve incentive or natural rights objectives are "orthogonal" to the most common perception about IP, which is that its function is to prevent plagiarism. They argue that this "plagiarism fallacy . . . . helps explain pervasive illegal infringing activity on the Internet" as stemming from a failure to understand what IP is rather than indifference toward IP rights.

Examination—the process of reviewing a patent application and deciding whether to grant the requested patent—improves patent quality in two ways. It acts as a substantive screen, filtering out meritless applications and improving meritorious ones. It also acts as a costly screen, discouraging applicants from seeking low-value patents. Yet despite these dual roles, the patent system has a substantial quality problem: it is both too easy to get a patent (because examiners grant invalid patents that should be filtered out by a substantive screen) and too cheap to do so (because examiners grant low-value nuisance patents that should be filtered out by a costly screen).

This article argues that these flaws in patent screening are both worse, and better, than has been recognized. They are worse because the flaws are not static; they are dynamic, interacting to reinforce each other. This interaction leads to a vicious cycle of more and more patents that should never have been granted. When patents are too easily obtained, that undermines the costly screen, because even a plainly invalid patent has a nuisance value greater than its cost. And when patents are too cheaply obtained, that undermines the substantive screen, because there will be more patent applications, and the examination system cannot scale indefinitely without sacrificing accuracy. The result is a cycle of more and more applications, being screened less and less accurately, to give more and more low-quality patents. And although it is hard to test directly if the quality of patent examination is falling, there is evidence suggesting that this cycle is affecting the patent system.
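
The feedback loop Ford describes can be sketched as a toy simulation. The structure mirrors the argument (cheap, easy patents attract more filings, and fixed examiner capacity then screens a growing docket less accurately), but every number below is my own illustrative assumption, not a figure from the article:

```python
# Toy model of the examination feedback loop: invalid grants attract
# more nuisance filings, and fixed examiner capacity then screens a
# growing docket less accurately. All parameters are illustrative.
applications = 100.0
accuracy = 0.90                 # share of meritless applications caught

bad_grants = []
for year in range(5):
    bad_grants.append(applications * (1 - accuracy))
    applications *= 1.10        # nuisance value draws 10% more filings
    accuracy = max(0.50, accuracy - 0.02)   # examination can't scale

# Low-quality grants rise every year as the cycle reinforces itself.
assert all(a < b for a, b in zip(bad_grants, bad_grants[1:]))
```

The point of the sketch is only the direction of the dynamic: once both screens weaken each other, low-quality grants grow even though neither parameter changes dramatically in any single year.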

At the same time, things are better because this cycle may be surprisingly easy to solve. The cycle gives policymakers substantial flexibility in designing patent reforms, because the effect of a reform on one piece of the cycle will propagate to the rest of the cycle. Reformers can concentrate on the easiest places to make reforms (like reforming the litigation system) instead of trying to do the impossible (like eliminating examination errors). Such reforms would not only have local effects, but could help make the entire patent system work better.

Ford provides a refreshingly clear explanation of the two distinct roles that patent examination theoretically plays, and of the feedback loop between them.

Friday, July 10, 2015

Pharmaceutical companies sometimes engage in "product hopping," in which they attempt to move patients to a new product with longer patent protection before the generic version of an older drug becomes available. Product hopping was recently in the news with New York state's antitrust suit against Actavis for its decision to withdraw Namenda IR, its 2x/day Alzheimer's drug (with patent protection ending July 2015), to force patients to switch to Namenda XR, a 1x/day version (with patent protection until 2029). In an opinion by Judge Walker, the Second Circuit upheld a preliminary injunction barring withdrawal of Namenda IR prior to generic entry, concluding that the "hard switch crosses the line from persuasion to coercion and is anticompetitive."

The cost to consumers of product hopping that obstructs access to generic drugs is clear. But these marketing strategies raise another potential welfare loss that receives less attention: when a pharmaceutical company delays the introduction of a new drug version until just before patent protection on the old version is set to expire, that delay can harm consumers who prefer the new version. This latter cost is the focus of a new empirical paper by Professor Brad Shapiro (Chicago Booth), Estimating the Cost of Strategic Entry Delay in Pharmaceuticals: The Case of Ambien CR.

Monday, July 6, 2015

Over at Prawfsblawg, Orly Lobel discusses the case of former Goldman Sachs programmer Sergey Aleynikov, who has had an up and down (more like down and up) experience dealing with criminal trade secret prosecutions. I think the case is worthy of discussion for a variety of reasons, but I will focus on how different viewpoints color the facts of this case. Prof. Lobel describes this as a story of "secrecy hysteria," while I view it as a run-of-the-mill "don't copy the source code" case.

I'll discuss my point of view briefly below, but I will admit my priors: I spent my career advising companies and employees on trade secrets: how to protect them, how to exit without getting sued, and how to win lawsuits as plaintiffs and defendants. I probably represented plaintiffs and defendants with the same frequency, and -- of course -- my client was always right.

More facts after the jump. I should make clear that I've got no position on the criminal prosecutions; my views here are more about trade secrecy than whether the criminal laws should be used to protect them (or should have applied to this particular case). Prof. Lobel and I may well agree on the latter point.

Since there is no easy way to index or search through most patents, it is exceedingly difficult (if not impossible) to know if one is infringing a patent. In some industries, firms simply ignore patents, because it is less expensive to pay damages ex post than to do patent clearance searches ex ante. Larger numbers of patents exacerbate this problem. Christina Mulligan and Timothy Lee provide an excellent description of the problem of patent clearance searches in their article on Scaling the Patent System. One sentence in particular drives the problem home: “In software, for example, patent clearance by all firms would require many times more hours of legal research than all patent lawyers in the United States can bill in a year.”
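
To see why that sentence is plausible, a back-of-envelope sketch helps. Every number below is a made-up round figure of mine, not an estimate from Mulligan and Lee:

```python
# Back-of-envelope arithmetic behind the clearance-search problem.
# All inputs are assumed round numbers for illustration only.
software_patents = 600_000      # assumed in-force patents to check
hours_per_patent = 10           # assumed attorney-hours per patent
software_firms = 50_000         # assumed firms that would each need clearance

total_hours = software_patents * hours_per_patent * software_firms

patent_lawyers = 40_000         # assumed U.S. patent attorneys and agents
billable_per_year = 2_000       # assumed billable hours per lawyer per year

years_needed = total_hours / (patent_lawyers * billable_per_year)
print(f"{years_needed:,.0f} years of the entire patent bar's time")
```

Even if every assumed figure is off by an order of magnitude, the bar's annual capacity is swamped, which is the point of the quoted sentence.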

Wednesday, July 1, 2015

Fiona Scott Morton (Yale School of Management) and Carl Shapiro (Berkeley School of Business) have posted Patent Assertions: Are We Any Closer to Aligning Reward to Contribution?, which has a nice summary of some recent developments related to patent assertion entities (PAEs) and standard-essential patents (SEPs), even for readers who will disagree with their ultimate conclusions.

Scott Morton and Shapiro argue that there is often a "divergence between the reward that a patent holder can obtain by asserting its patent and the social contribution" of the patent. They do not attempt to measure the social value from patents; rather, their argument is based on economic theory. PAEs can impose high litigation costs with little downside risk, especially when they assert low-quality patents for their nuisance value. And royalty stacking and patent hold-up (backed up by the threat of an injunction) can increase the reward to patentees beyond the patent's value, especially for products that comply with standards for which there are many SEPs.

Monday, June 22, 2015

Just a short note that the Court has affirmed Brulotte v. Thys in Kimble v. Marvel Entertainment. The question was a simple one: can a patent owner charge a royalty for sales after the patent expires? Brulotte said no. The economic rationale for that rule has been whittled away, just as it has been in antitrust. But the Court today said...no. Stare decisis governs, and the reasons for overturning are just not strong enough.

An interesting aspect of this dispute is that many folks with whom I often disagree on patent policy were in favor of lifting the post-expiration ban, while I never thought it was that big a deal because you can always creatively license around it.

The good news is that the Court has affirmed my latter assumption. The most important quote in the whole case (at least on my very quick reading) may be (citations omitted):

And parties have still more options when a licensing agreement covers either multiple patents or additional non-patent rights. Under Brulotte, royalties may run until the latest-running patent covered in the parties’ agreement expires. Too, post-expiration royalties are allowable so long as tied to a non-patent right—even when closely related to a patent. That means, for example, that a license involving both a patent and a trade secret can set a 5% royalty during the patent period (as compensation for the two combined) and a 4% royalty afterward (as payment for the trade secret alone). Finally and most broadly, Brulotte poses no bar to business arrangements other than royalties—all kinds of joint ventures, for example—that enable parties to share the risks and rewards of commercializing an invention.

The trade secret example is especially important. As I note in my article Patent Challenges and Royalty Inflation, there is uncertainty about how much one must drop the license fee for trade secrets. For example, I cite one case where a fifty percent drop when the patent expires was still anticompetitive under Brulotte.

But not all patents come with trade secrets. The question is whether an optional know-how license will be sufficient. If I wanted to try for post-expiration royalties, I'd give it a shot but not count on it.
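
As a concrete illustration of the license structure the Court describes, here is a minimal sketch. The 5%/4% rates come from the Court's own example; the years and the function itself are my own hypotheticals:

```python
# Sketch of the hybrid patent + trade secret license from the Kimble
# opinion's example. The rates are the Court's (5% combined, 4% after
# expiration); the years and API are hypothetical.

def royalty_rate(sale_year, patent_expiry_year,
                 combined_rate=0.05, trade_secret_rate=0.04):
    """Royalty owed on sales in a given year under a license covering
    both a patent and a related trade secret."""
    if sale_year < patent_expiry_year:
        return combined_rate       # patent + trade secret together
    return trade_secret_rate       # trade secret alone, post-expiration

# A 5% rate while the patent lives, stepping down to 4% afterward,
# stays on the right side of Brulotte.
print(royalty_rate(2014, 2015), royalty_rate(2016, 2015))
```

The open question flagged above is how large that step-down must be; the structure is easy to write, but the safe size of the drop is not.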

Tuesday, June 16, 2015

As Lisa predicted a couple weeks ago, the Federal Circuit issued a new en banc (11-1) opinion today in Williamson v. Citrix without argument or further briefing. Patently-O has full coverage, so I'll get right to the core issue: the Federal Circuit reversed its prior precedent on functional claiming, but not all the way.

By way of background, if you claim a "means plus function" element (e.g., means for adding two numbers) then you need to disclose the structure for your means in the specification, which includes both the hardware and the algorithm (a general purpose computer programmed to take two numbers as an input, add them together, and report the sum as an output). If you don't put that structure in the specification, your claim is invalid as indefinite. Seem absurd for easy or common functions? More on that later.
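
To make the disclosure requirement concrete: for the means-for-adding example, the "algorithm" the specification must spell out is no more than this (a toy sketch of mine, not language from any actual patent):

```python
def add_and_report(a, b):
    """The disclosed 'algorithm' for a means-for-adding element:
    take two numbers as input, add them, report the sum as output."""
    total = a + b      # the entire claimed function, as an algorithm
    return total

print(add_and_report(2, 3))
```

That the required disclosure can be this trivial is exactly the absurdity flagged above for easy or common functions.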

The question is what you do when the word "means" is replaced with something else, like "module" or "unit" or "logic." The presumption has long been that this would not be means plus function, and it would be treated like structure unless the opposing party could convince the court that it really was a means plus function in disguise. Starting in about 2004, the Federal Circuit doubled down on this rule, making this a strong presumption against means plus function that was very difficult to overcome. As a result, the courts affirmed a bunch of patents that claimed functions but didn't actually teach how to do them. More on that later.

In this case, the court backtracked to pre-2004 rules. Rather than looking at the words, we look to see whether the limitation is really just claiming a means for doing a function, or whether the limitation has sufficient structure built right in. For example, you might have a limitation "adding module programmed to take two numbers as an input, add them together, and report the sum as an output." This is clearly functional, but the structure is right there in the limitation. Judge Reyna (along with some of my colleagues in the academy) would go further and argue that any functional claiming has to be in the specification, but I've never been convinced by that argument, in part because you can always put algorithms and structure right into claims. Judge Newman would have stuck with the formalistic requirement of requiring "means" to mean "means plus function," but it is clear that this view is currently disfavored.

Friday, June 5, 2015

Last November, the Federal Circuit panel opinion in Williamson v. Citrix held that the district court erroneously construed the limitation "distributed learning control module" as a means-plus-function expression. The majority emphasized that failure to use the word "means" in a claim limitation creates a strong rebuttable presumption that it is not a means-plus-function limitation. In dissent, Judge Reyna argued that the limitation simply substituted the "nonce" word "module" for "means." On December 5 (exactly six months ago), Citrix et al. filed for rehearing en banc, supported by amicus briefs by the EFF and a group of IP professors (including me). The IP professor brief, written by Mark Lemley, argues that patentees have exploited the Federal Circuit's inconsistency in this area to engage in functional claiming without satisfying means-plus-function claim rules.

Based on the timelines in the Federal Circuit's internal operating procedures, it seems improbable that the court could still be deciding whether to act on the rehearing petition. So perhaps the court granted rehearing en banc without argument? Issuing an en banc decision can take a while—Akamai v. Limelight took over 9 months from argument to opinion—but that was unusual, so maybe we will hear something soon. (Here is the Williamson v. Citrix docket on Bloomberg Law, subscription required.)