In February 2017, the Delaware Court of Chancery faced a conundrum: following settlement of a shareholder action after a contested merger, shareholders representing 49,164,415 shares claimed settlement proceeds, but the class contained only 36,793,758 shares.[1] By definition, holders of over 12 million of these shares must have lacked entitlement to settlement disbursements, yet all claimant shareholders presented valid evidence of ownership. Investigation by class attorneys failed to establish the “current” owners of class shares, as did investigation at their request by the Depository Trust Company (“DTC”), a subsidiary of the Depository Trust & Clearing Corporation (“DTCC”), the major U.S. clearing house and equivalent of Euroclear and Clearstream. DTC was created in 1973 to facilitate clearing and settlement of U.S. and foreign securities by retaining custody and changing title by book-entry.[2] The Court cut the Gordian knot by disregarding present claims and ordering settlement proceeds distributed to holders of record identified for purposes of merger consideration – forgoing some valid beneficial owners but also preventing dilution of settlement proceeds by disbursing them to 25% more shareholders than should have existed at the time of merger.

This reflects a systemic issue affecting transactions where the centralized ledger held by DTC proves unable to determine ownership of registered shares at a specific point in time. An investigation helped clarify what had produced a disparity as striking as 25% between holders of record with DTC and beneficial owners able to prove a valid claim.[3] But the valid claimants still could not be ascertained.

In In re Dole Food Co., the main culprits for the discrepancy were delays in registering trades and short selling. A U.S. equity market convention based on applicable law, market structures, and technological determinants requires stock trades to settle at T+3, within three business days.[4] That means even if DTC applies a one-day freeze[5] on trading a company’s stock pending merger in order to determine shareholders of record, this snapshot still does not reflect trades made within the previous two days. For actively traded stock, this can leave millions of shares temporarily unregistered with DTC. Even more difficult to trace are short sales, where holders of record are unaware that their stock was borrowed by other investors for sale to third parties and returned only after the short position was closed out, which may result in contemporaneous ownership claims to the same share. Ample case law demonstrates similar identification issues with proxy voting and merger consideration payouts.[6]
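The arithmetic of such over-claims can be sketched with toy numbers (purely illustrative, not the actual Dole figures): when a securities lender’s broker still carries the lender as owner while the buyer in a short sale holds an equally valid trade confirmation, both can later document ownership of the same shares, so total provable claims exceed shares outstanding.

```python
# Toy illustration (hypothetical numbers, not the actual Dole figures) of how
# short selling produces more provable ownership claims than shares exist.
shares_outstanding = 100

# The securities lender's broker still shows the lender as owner of the 40
# borrowed shares; the buyer in the short sale also holds a valid trade
# confirmation for those same 40 shares.
claims = {
    "long_holder": 60,        # never lent out; one clean claim
    "securities_lender": 40,  # lent shares out, but still listed as owner
    "short_sale_buyer": 40,   # bought the borrowed shares in good faith
}

total_claims = sum(claims.values())
print(total_claims)                      # 140 claimed vs. 100 outstanding
assert total_claims > shares_outstanding
```

Every claimant here presents valid documentation, which is exactly the evidentiary impasse the Court of Chancery faced.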

As Dole Food Co. mentions, in a footnote no less,[7] these systemic issues could be prevented by using a decentralized ledger, in which every broker-dealer could record trades instantly, making them immediately available to all participants and clarifying ownership of every share within the system at every moment with minimal delay. That is one rationale behind blockchain. Incidentally, Dole Food Co. is the only court opinion in the entire LexisNexis database, which covers every published and most unpublished judicial opinions in the U.S., that mentions the term blockchain anywhere in its text – and even there, the term appears in a footnote rather than the body of the opinion, in the context of Governor Jack Markell’s Delaware Blockchain Initiative. While a Westlaw search brings up some 75 cases addressing the subject, Dole Food Co. is still the only one that contains the term.

II. The mechanics of blockchain

Blockchain is an algorithm for encoding information that allows a historical record – the “chain” – to be amended with subsequent transactions – “blocks” – in a way that is nearly impossible to alter or forge retroactively. This is achieved through distributed ledgers that keep copies of the record on the computers of all system participants (“network nodes,” or “miners” in the context of virtual currencies). In large networks like bitcoin, millions of computers, often assembled in data centers, hold blockchain records. Inconsistencies between individual copies of the chain are detected and corrected by the majority of the system’s processing power. Under this “proof of work” concept, the largest total computing power makes and executes decisions.[8] A record’s validity is established “democratically”: if the majority of the system’s computing power presents one version of a full blockchain over another, that version prevails and overrides differing records held by the remaining nodes.[9] This self-correcting mechanism applies to historical records but not to new ones, which are added to the chain as blocks encrypted using private and public keys, documenting every new transaction. Once verified and added, blocks in a chain become practically unalterable, creating robust security for distributed-ledger record keeping.[10] Forging an existing record would require simultaneously hijacking vast numbers of networked computers to override a target blockchain.
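The chaining that makes retroactive alteration detectable can be sketched in a few lines of Python (a minimal, illustrative model, omitting encryption, consensus, and proof of work): each block stores the hash of its predecessor, so editing any historical block breaks every link after it.

```python
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 hash of a block's full contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, transactions):
    """Append a block that stores the hash of the previous block."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def verify(chain):
    """Recompute every link; any retroactive edit breaks a later link."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain = []
add_block(chain, ["Alice transfers share #1 to Bob"])
add_block(chain, ["Bob transfers share #1 to Carol"])
assert verify(chain)

# Retroactively rewriting history invalidates every subsequent link:
chain[0]["transactions"] = ["Alice transfers share #1 to Mallory"]
assert not verify(chain)
```

In a real network, a forger would additionally have to recompute and re-propagate every later block faster than the majority of honest computing power, which is what makes the record practically unalterable.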

Public blockchains are not controlled by any private or governmental party serving as record-keeper or accountant. Evolution occurs through user consensus producing publicly verifiable transactions. Public blockchains are accessible to anyone but inherently slow: updating a transaction on thousands of unaffiliated computers around the world may take hours.[11] Financial technologies relying on such “unpermissioned” networks[12] include virtual currencies like bitcoin.[13]

Private blockchains, maintained by consortia or private entities and protected by stringent security protocols, restrict access to preselected participants, thus increasing security and transaction speed. In a private network, it may be possible to alter blockchains after the fact, perhaps to correct errors.[14] This type appears more suitable for government-maintained record-keeping, which could be used, for example, to track ownership of assets such as real estate, securities, or gems.

Blockchain was first used in bitcoin,[15] a virtual currency, before expanding to general record-keeping transactions. It then evolved toward self-executing smart contracts using Ethereum technology[16] and may eventually reach an “Internet of Agreements.”[17] Smart contracts are self-executing contracts written into lines of code that extend the utility of blockchain from keeping a record of transactions to automatically implementing the terms of the contract.[18] Ethereum provides an open-source, public distributed platform that offers smart contract scripting functionality through a Turing-complete virtual machine that can execute scripts through an international network of public nodes.[19] While its promise for applications such as virtual currencies and payments is obvious and currently being explored by major financial institutions,[20] blockchain’s real strength lies in authentication and keeping records up to date, especially for valuable, highly liquid assets like securities.
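The core idea of a smart contract, terms enforced by code rather than by the parties, can be sketched as a function that moves ledger balances only when its agreed condition is met (an illustrative Python sketch with hypothetical names and amounts; real Ethereum contracts are written in dedicated languages such as Solidity and run on the Ethereum virtual machine).

```python
# Hypothetical ledger and party names, for illustration only.
ledger = {"buyer": 100, "seller": 0}

def escrow_contract(ledger, price, delivery_confirmed):
    """Self-executing escrow: the code, not the parties, enforces the terms.
    Funds move if and only if the agreed condition is satisfied."""
    if delivery_confirmed and ledger["buyer"] >= price:
        ledger["buyer"] -= price
        ledger["seller"] += price
        return "settled"
    return "pending"

assert escrow_contract(ledger, 40, delivery_confirmed=False) == "pending"
assert escrow_contract(ledger, 40, delivery_confirmed=True) == "settled"
assert ledger == {"buyer": 60, "seller": 40}
```

Deployed on a blockchain, such a program would itself be part of the tamper-resistant record, so neither party could unilaterally change its terms after the fact.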

III. Regulatory responses

A. Federal government

Regulatory responses to emerging technologies, and to blockchain in particular, range from excitement to suspicion to indifference. The U.S. government’s approach to blockchain and bitcoin exemplifies this: Congress held a total of seven hearings involving blockchain and digital currencies – all between 2013 and 2017 – addressing concerns ranging from Caribbean development and U.S.–China relations to the impact of virtual currencies, disruptive technologies, and cybersecurity threats on small business and national security.[21] Still, just one federal bill on blockchain, regarding virtual currencies, was proposed: on December 1, 2014, and January 2, 2015, within a month of each other, Congressman Steve Stockman (R-TX) introduced two virtually identical bills, the Cryptocurrency Protocol Protection and Moratorium Act[22] and the Online Market Protection Act of 2014.[23] Even these bills were far from revolutionary: they merely proposed a five-year moratorium on federal and state regulation of cryptocurrencies.[24]

Additionally, the bills proposed a puzzlingly inconsistent tax treatment of virtual currencies: on the one hand, they required cryptocurrencies to be treated for tax purposes as currency rather than property.[25] On the other hand, they allowed taxation only upon monetization of cryptocurrency, i.e., upon conversion into dollars or other government-issued currency,[26] which would also apply to taxing income from mining cryptocurrencies, i.e., from processing and recording cryptocurrency transactions in a distributed ledger.[27]

Referred to the Committee on Financial Services, to Ways and Means, and to Agriculture, the bills, unsurprisingly, never saw the light of day again.

Another federal legislative attempt mentioning blockchain (once) was a Congressional resolution proposed July 14, 2016, tabled after forty minutes of floor discussion and never resumed, formally titled “Resolution expressing the sense of the House of Representatives that the United States should adopt a national policy for technology to promote consumers’ access to financial tools and online commerce to promote economic growth and consumer empowerment.”[28] In the final paragraph of its preamble, the draft resolution recognized blockchain technology’s potential for financial services, payments, health care, energy, property management, and intellectual property management.[29]

Since the federal government has not exercised its constitutional preemptive power to regulate blockchain to the exclusion of states[30] (as it generally does with financial regulation) or even expressed intention to do so, regardless of the interest of federal agencies,[31] states remain free to introduce their own rules and regulations. Some have attempted to do that, however haltingly.

B. State jurisdictions

Arizona

In 2017, without much media attention, Arizona fast-tracked a blockchain records recognition bill amending existing legislation on electronic records.[32] Arizona House Bill 2417, introduced February 6, 2017, passed both state chambers and was signed into law March 29, 2017.[33] It amended Title 44, chapter 26, Arizona Revised Statutes, by adding article 5, Blockchain technology. This amendment recognized a signature secured through blockchain technology as an electronic signature; a record or contract secured through blockchain technology as an electronic record; smart contracts[34] as valid; and ownership and other rights in interstate or foreign commerce as remaining valid if subsequently secured by blockchain technology.[35]

California

California, home to Silicon Valley, failed to pass its virtual currencies bill after much media hype. California Assembly Bill 1326, an act to add Division 11 to the Financial Code (commencing with Section 26000), relating to virtual currency, was introduced on February 27, 2015, but failed in the Senate on August 11, 2016.[36] California has attempted no further regulation of blockchain or digital currencies to date.

AB 1326 tried to improve on New York’s regulation of virtual currency businesses by nominally relaxing its requirements, exempting such businesses under certain circumstances from needing a money transmission license (as required by New York) in addition to a virtual currency license; and by exempting network administrators, software providers, and exchange services from California’s proposed virtual currency law. But during public hearings, smaller fintech companies[37] voiced especially strong opposition, understandably considering the proposed regulations’ reach: AB 1326 prohibited engaging in digital currency business without enrolling in the program by obtaining a license from the Commissioner of Business Oversight unless specifically exempted. It specified, inter alia, capital requirements, customer receipt requirements, cybersecurity information reporting, audit reports, and fees, as well as regulations on advertising.

Delaware

In May 2016, Delaware’s then-Governor Jack Markell announced a state initiative to adapt regulations to blockchain technology.[38] In response to the Governor’s request, the Delaware State Bar Association’s Corporation Law Council presented, among other proposals, an amendment to the Delaware General Corporation Law that would allow Delaware entities to use distributed ledger technology to record stock transfers.[39] The initiative was enthusiastically welcomed by Delaware Chancery Court Vice Chancellor J. Travis Laster who, in his keynote address to the Council of Institutional Investors in September 2016, called blockchain “a plunger that you can use to clean up the plumbing” of capital markets for the benefit of investors.[40]

“Smart records” technology, implemented at the Delaware Public Archives in cooperation with Symbiont, a blockchain startup,[42] uses a distributed ledger to automate compliance with document retention laws governing the retention and destruction of archival documents.[43]

“Smart UCC filings” will replace slow, error-prone paper filings with a distributed ledger based on the technology tested at the Delaware Public Archives. “Smart UCC filings” will automate the release or renewal of UCC filings and collateral, increase the speed of UCC searches, and improve the accuracy of filings, thus preventing fraud and cutting costs.[44]

Shares issued and tracked through a blockchain-based distributed ledger would render central accountants and custodians like DTC superfluous, reducing delays and improving the accuracy of record-keeping, and preventing many class actions of the kind that arise in the current system, where beneficial ownership and voting rights can be determined only in a probabilistic manner.[45]

However, the DGCL amendments would only facilitate issuance of new shares registered on a distributed ledger. For existing shares, the transition to a distributed ledger would be more complicated, since only uncertificated shares would qualify.[46] Although DGCL Section 158 allows boards of directors to issue resolutions qualifying some or all of their corporation’s stock as uncertificated shares, existing certificated shares would not be covered until their certificates were surrendered to the company.[47] Thus, a corporation unable to recover share certificates would be unable to transition to distributed ledger shares.

Furthermore, trading of shares on secondary markets would not be subject to Delaware’s new law, since trade registration is regulated separately and the DGCL affects only transfers of record.[48] It remains unclear how secondary markets will respond to distributed ledger share registration, since real-time clearing and settlement using blockchain would require traders to participate in the record-keeping of distributed ledger transactions.

Hawaii

On January 25, 2017, Hawaii introduced a House draft bill, “An act relating to economic development.”[49] The bill establishes “a working group consisting of representation from the public and private sectors to examine, educate, and promote best practices for enabling blockchain technology to benefit local industries, residents, and the State of Hawaii.” It recognizes industries potentially affected by blockchain: (1) identity and access management (digital IDs); (2) health care (health care records); (3) legal (“tracking, verification, authentication, and record keeping of court orders, contracts, titles, and records”); (4) financial services (where blockchain is already in use); (5) manufacturing (provenance and authentication of goods); and (6) tourism (local bitcoin payments).[50] The bill awaits a final vote in the Senate Ways and Means Committee.

Illinois

On March 21, 2017, the Illinois House of Representatives passed House Joint Resolution 25, which created a task force to study the benefits of blockchain for recordkeeping by local governments.[51] The resolution went to the Senate Committee on Assignments on March 28, 2017 and is still pending. If adopted, the task force study would be a first step toward transferring Illinois record keeping to a distributed ledger.

The bill was partly a response to Chicago’s Cook County exploration of blockchain-based records of property title transfers and liens, the first such attempt by a local Recorder’s Office in the U.S.[52] The program was announced in October 2016.[53]

Maine

Maine’s draft bill, introduced March 7, 2017 under emergency procedures, would have established a Commission to Study Using Blockchain Technology in Conjunction with Paper Ballots in Maine Elections.[54] The proposed Commission’s purpose was to “study the potential uses for blockchain technology to support and enhance Maine’s current paper ballot election system for the purpose of improving paper ballot security, increasing election transparency and reducing costs.”[55] The bill failed at first vote.

Nevada

On March 20, 2017, the Nevada Senate introduced Bill 398, a bill with a high likelihood of passing at least the first committee vote.[56] It amends NRS Title 59, relating to electronic transactions, to recognize the validity of blockchain records, blockchain-enabled electronic signatures, and smart contracts. More interestingly, the act prohibits taxation or regulation of blockchain or smart contracts, including through licensing, permits, and certifications.[57] This is the opposite of the New York and California approach and may reflect Nevada’s pro-business ambitions, alongside its continuing attempts to compete with Delaware as the incorporation jurisdiction of choice.

New York

Although New York did not enact state-wide legislation recognizing blockchain for record-keeping purposes, in June 2015 it became the first state in the U.S. to regulate virtual currency companies[58] through state agency rulemaking.[59] Entities engaging in virtual currency business not covered by an exemption from New York’s virtual currency rules must obtain a BitLicense from New York’s Department of Financial Services.[60] In almost two years, exactly three such licenses were granted.[61] New York requires virtual currency businesses to hold both BitLicenses and money transmission licenses (MTAs), further increasing regulatory burden on smaller companies and prompting start-ups unable to comply to withdraw from operations in New York.[62]

Vermont

Vermont gained considerable tech media attention on June 13, 2015, when then-Governor Peter Shumlin signed into law Act 51, “An act relating to promoting economic development.”[63] The Act contained a section, titled “Study and Report; Blockchain Technology,” mandating a report on “recommendations on the potential opportunities and risks of creating a presumption of validity for electronic facts and records that employ blockchain technology.”[64] Vermont was rumored to contemplate switching to blockchain-based public record keeping. But the January 15, 2016 report[65] quelled the tech community’s excitement[66] with findings such as: “[i]n light of the very limited possible benefits and the likely significant costs for either entering into a private or public blockchain or setting up a state-operated blockchain, at this time, blockchain technology would be of limited value in conducting state business.”[67] This damning assessment appears to have indefinitely tabled prospects of distributed ledger public record keeping.[68]

In June 2016, Vermont passed “An act relating to miscellaneous economic development provisions,”[69] adding an entire section[70] on recognizing validity of blockchain records and their admissibility in courts as evidence without need for authentication:[71] “A digital record electronically registered in a blockchain shall be self-authenticating pursuant to Vermont Rule of Evidence 902, if it is accompanied by a written declaration of a qualified person, made under oath…”[72]

Although far from the revolutionary switch to distributed ledger public records discussed in tech media, the bill was unique in explicitly affirming the evidentiary value of blockchain records. The relevance of blockchain records in Vermont judicial proceedings remains to be seen. While Vermont is no financial center or cutting-edge jurisdiction, its precedent might be adopted by New York, California, and especially Delaware, where such evidence is more likely to be used.

IV. Implications

Delaware’s project, however hypothetical, to maintain corporate records by distributed ledger, Vermont’s bill on the authentication and evidentiary value of blockchain records, and Arizona’s recognition of smart contracts represent a new trend: states employing an originally purely financial technology for legal purposes. While regulation by restriction or prohibition of blockchain-based virtual currencies was a given in the financial industry’s expansive regulatory environment, state recognition of blockchain’s value for authenticating title to personal and real property constitutes a step towards incorporating distributed ledger technology into the legal sphere, where “code is law” might take on a literal meaning.

Blockchain has downsides: besides the cost and technical difficulties of implementing distributed ledger record-keeping, risks of network hacking or fraudulently obtaining private or public keys could jeopardize this record system and all blockchains contained therein.[73] The facial anonymity of users may facilitate money laundering and terrorist financing, a main regulatory concern with virtual currencies. Widespread blockchain network access could contribute to herding behavior and increase market volatility under financial system stress,[74] while “kill switches” similar to those used by stock exchanges to prevent market collapse when share prices plummet may prove near-impossible to implement across distributed ledgers.

But overall, the increased robustness and security of blockchain record keeping is difficult to match with existing technologies. Especially for corporate record keeping, the time has come to fix a system failing to serve its purpose, as demonstrated by the Dole Food share incident and others with similarly unresolvable shareholder identification issues.

V. Conclusion

The most important developments for blockchain’s regulation and implementation in an evidentiary context occurred in Arizona (recognition of smart contracts), Vermont (blockchain as evidence), Chicago (real estate records), and, most importantly, Delaware (a pending initiative authorizing registration of shares of Delaware companies in blockchain form). Since 64 percent of Fortune 500 companies and over 1 million entities are incorporated there,[75] enactment of the Delaware initiative would change the regulatory landscape for securities by setting precedent in the most important corporate jurisdiction of the U.S. Other states competing for corporate taxes and fees would be sure to follow.

[17] World Gov’t Summit, Building the Hyperconnected Future on Blockchains (Feb. 2017), http://internetofagreements.com/files/WorldGovernmentSummit-Dubai2017.pdf; see also Vinay Gupta, The Promise of Blockchain Is a World Without Middlemen, Harv. Bus. Rev. (Mar. 6, 2017), https://hbr.org/2017/03/the-promise-of-blockchain-is-a-world-without-middlemen.

Neither the Federal Government nor any State or political subdivision thereof shall impose any statutory restrictions or regulations specifically identifying and governing the creation, use, exploitation, possession or transfer of any algorithmic protocols governing the operation of any virtual, non-physical, algorithm or computer source code-based medium for exchange (collectively, “cryptocurrency” as defined herein) for a period beginning June 1, 2015, and extending five years after the enactment of this Act (such period, the “moratorium period”), except for statutes already enacted and effective prior to the date of enactment of this Act, and further suspending the enactment and effectiveness of any and all pending statutes and regulations until the end of the aforementioned moratorium period, except as otherwise provided in this section.

[29] “Whereas blockchain technology with the appropriate protections has the potential to fundamentally change the manner in which trust and security are established in online transactions through various potential applications in sectors including financial services, payments, health care, energy, property management, and intellectual property management.” Id., Preamble.

[30] See U.S. Const. art. VI, cl. 2 (“This Constitution, and the laws of the United States which shall be made in pursuance thereof; and all treaties made, or which shall be made, under the authority of the United States, shall be the supreme law of the land; and the judges in every state shall be bound thereby, anything in the Constitution or laws of any State to the contrary notwithstanding.”). Furthermore, federal law preempts conflicting state law. See, e.g., Maryland v. Louisiana, 451 U.S. 725, 746 (1981) (“Consistent with that command, we have long recognized that state laws that conflict with federal law are ‘without effect.’”).

1. “Blockchain technology” means distributed ledger technology that uses a distributed, decentralized, shared and replicated ledger, which may be public or private, permissioned or permissionless, or driven by tokenized crypto economics or tokenless. The data on the ledger is protected with cryptography, is immutable and auditable and provides an uncensored truth. 2. “Smart contract” means an event-driven program, with state, that runs on a distributed, decentralized, shared and replicated ledger and that can take custody over and instruct transfer of assets on that ledger.

[51] H.R.J. Res. 25, 100th Gen. Assemb., Reg. Sess. (Ill. 2017) (creating the “Illinois Legislative Blockchain and Distributed Ledger Task Force to study how and if State, county, and municipal governments can benefit from a transition to a blockchain based system for recordkeeping and service delivery.”).

[61] Despite 22 initial filings, Circle, Ripple, and Coinbase were the only three companies holding a New York BitLicense in January 2017. See Michael del Castillo, Bitcoin Exchange Coinbase Receives New York BitLicense, Coindesk (Jan. 17, 2017), http://www.coindesk.com/bitcoin-exchange-coinbase-receives-bitlicense/.

[62] Yessi Bello Perez, The Real Cost of Applying for a New York BitLicense, Coindesk (Aug. 13, 2015), http://www.coindesk.com/real-cost-applying-new-york-bitlicense/.

Open Source Software and Standards Development Organizations: Symbiotic Functions in the Innovation Equation
Editor’s Note (Feb. 20, 2017): This post was written by guest contributor David J. Kappos, a current partner at Cravath, Swaine & Moore LLP, and former Director of the United States Patent and Trademark Office. Before heading up the USPTO, Mr. Kappos was a Vice President and Assistant General Counsel (focusing on IP issues) for IBM.

Two groups—industry standards development organizations (SDOs) and the open source software (OSS) community—have contributed enormously to the breathtaking technological achievements of recent decades that permit anyone almost anywhere in the world to catch a Pokémon on a $100 smart-phone. SDOs have been remarkable stewards of this innovation, developing principles and processes of self-governance, such as FRAND (“fair, reasonable and non-discriminatory”) licensing, as well as catalyzing the inclusion of the best available technology from their applicable fields through the standards they set. Meanwhile, the OSS community, with its strong ethos of sharing and transparency, has accelerated the pace of software innovation. However, the intersection of their jurisdictions, OSS embedded in standards, has become a contentious subject. Some critics now question the compatibility of OSS with FRAND licensing, arguing instead that standards using OSS should be royalty-free.

As SDOs consider the interaction between OSS licensing models and FRAND terms, it is important to recognize that both OSS and standards are good for innovation, can and do coexist with the right choice of license, and indeed complement one another. SDOs should not lightly undertake modifications to their policies and practices that are unnecessary, and will likely have serious negative repercussions on the quality of technology contributed to their standards.

Standards and Open Source Software Both Advance Innovation

An important precipitating factor in the recent wave of innovation has been the creation and adoption of industry standards. In the telecommunications industry, widely adopted, highly innovative standards such as 3G and 4G have created vastly improved technical capabilities. The technology behind these standards is protected by standard essential patents (SEPs), which are accepted into a technical standard by SDOs. An important balance has long been maintained by leading SDOs such as the European Telecommunications Standards Institute (ETSI) and the International Telecommunication Union (ITU), recognizing the need to reward innovators by compensating them for giving access to their patented inventions, while also recognizing the need to make standardized and interoperable technology that requires such inventive contributions available for implementers of standards to use at reasonable cost.[1] As evidenced by the enormous technical advances in standardized technology in fields like mobile telecommunications, the standardization process based on FRAND licensing has served and is serving humanity well, providing huge consumer value through both innovation and reasonably priced products.

Open Source Software also provides efficiencies and network effects crucial to innovation. Unlike proprietary software, OSS gives developers access to the source code of computer programs developed by others working on a given open source project, and enables developer communities to share tools and build on common infrastructure. In recent years, OSS has been critical in shaping cloud computing, big data and mobile technology.[2] The community-based development process for OSS has also allowed it to organically develop a cohesive social network.[3] This social element has been an important driver in the adoption of OSS by industry. In order to take advantage of OSS-enabled technological infrastructure in their own products and services, commercial entities have had to adapt their internal processes to comply with the software licenses and other requirements of Open Source communities, as well as provide funding and engineering talent to contribute to—and even to lead—Open Source projects.

Properly managed, investments in open source let companies engender goodwill with programmers and customers[4] and reduce development and maintenance expense by sharing software costs across the applicable open source community. The saved funds can be redeployed into further innovation rather than recreating duplicative infrastructure, while customers enjoy highly innovative, stable, low-cost software.

FRAND Works, for Both Patents and OSS

SDOs have long required that members agree to the FRAND system of licensing in order to participate in the standard-development process. The “fair, reasonable and non-discriminatory” tenets of FRAND require SEP holders to abide by licensing terms that are pro-competitive, include reasonable terms and conditions, and treat similarly situated licensees similarly. SDO guidelines historically also accommodate licensing of software generally (including OSS) under FRAND principles, rendering the two systems compatible by definition.[5]

For a number of years some critics argued that FRAND was “broken,” and that the “monopolies” conferred by SEPs would result in “patent holdup” and “royalty stacking” as SEP holders exploited the sunk costs of standards implementers. However, as FRAND-based industries like mobile telecommunications have matured and large bodies of data have become available showing the actual economics of the industry over 20-plus years, empirical studies and other scholarly works have sharply refuted the earlier dire predictions.[6] There is now no credible, current scholarship informed by the data finding any issue with FRAND licensing in the standards development context. In fact, industries like mobile telecommunications are thriving under the FRAND licensing regime. Indeed, cries of patent holdup and royalty stacking are making their way to the place where well-intentioned, seemingly plausible academic theories go once confronted by massively incompatible marketplace data.[7]

In the meantime, critics have more recently begun airing a new argument: that FRAND is discriminatory towards OSS and inherently incompatible with OSS.[8] As was the case with hold-up and royalty-stacking theories, there is no real-world indication of any incompatibility or discrimination, and the predominant view is that the marketplace has developed solutions to incorporating FRAND principles in licensing OSS.[9]

Open Source Software Is Compatible with FRAND

Contributors and SDOs are readily able to ensure that OSS contributions are compatible with FRAND by simply choosing compatible OSS licenses for contributions.

The Open Source Initiative (OSI), a standards body of sorts and arbiter of the “open source definition”, lists over 70 different licenses that have been reviewed and approved under its License Review Process.[10] Broadly speaking, these licenses fall into one of two categories, permissive or copyleft. Copyleft licenses require the licensed software and any modifications to be redistributed with the same set of rights (i.e., under the same copyleft license), thus preventing the software from becoming proprietary.[11] Claims of incompatibility of open source licenses with FRAND licensing predominantly stem from copyleft licenses and the conflation of open source software with free software. The original copyleft license, the GNU General Public License (GPL), was designed by Richard Stallman, founder of the Free Software Foundation, an organization that continues to advocate for free software.[12]

Permissive licenses, on the other hand, do not place restrictive terms on software redistribution, providing an opportunity for innovators to benefit financially from their modifications to applicable open source software. The only requirements accompanying redistribution under a Berkeley Software Distribution (BSD) license, for instance, are to provide the copyright notice, reproduce the license language and refrain from using the original software developer’s name in any derivative works without written permission.[13] Other popular permissive licenses include the MIT license and the ISC license.

A few other licenses can be categorized as neither permissive nor copyleft. For example, the Apache 2.0 license does not require distribution under the same license for any modifications or derivative works, but does require distribution under the same license for any unmodified components. Apache 2.0 also differs from many permissive licenses in its grant of a royalty-free patent license.[14] The Apache License’s grant of a royalty-free patent license on all contributions to Apache licensed software does conflict with FRAND principles, because it does not give innovators an avenue for fair compensation.[15] However, this problem only arises from a small subset of OSS licenses.

Permissive licenses account for the vast majority of OSI’s approved licenses and are fully compatible with FRAND licensing. An empirical study of all available OSI approved licenses in 2011 showed that only two of the eight most popular OSS licenses and seven of the 67 then approved OSS licenses had terms conflicting with FRAND.[16] These statistics flatly contradict any contention that OSS cannot be reconciled with FRAND. To the contrary, OSS is readily compatible with FRAND by simply choosing a permissive open source license for code submitted to standards bodies developing FRAND standards.

False Conflicts Created by Those Seeking Short Term Economic Gain Must Be Managed for What They Are: Unacceptable

SEPs and OSS spur innovation, both together and separately. Moreover, there are clearly many viable open source licenses that allow SDOs to utilize OSS without a resulting conflict between the open source license and the FRAND license. So why are critics claiming the systems are incompatible?

The answer is partly ideological, but mostly about business models. The ideological component is driven by the free software movement, which sprang from the early “hacker” culture of software engineering and has advocated for free software since the early 1980s. The free software community opposes any royalty-based licensing or proprietary software on principle and believes software developers should instead seek economic incentives through warranties, maintenance or other non-royalty based channels.[17] Many of the copyleft licenses incompatible with FRAND licensing were developed within this community, and some have argued that certain of them were designed explicitly to frustrate FRAND licensing.[18]

However, another potent force in propagating the myth of FRAND and OSS incompatibility has been interested parties who seek to reduce their licensing costs. The ubiquity of OSS in standards means that any policy removing OSS components from being factored into a royalty-bearing license would significantly reduce implementer component costs and thus improve the bottom line for implementers.[19] This is a natural competitive point of view and not objectionable per se—every implementer of technology wants to reduce its input costs. But for those SDOs seeking to maintain the delicate balance that encourages innovators to contribute cutting edge technology, the gambit must be taken for what it is—economic self-interest by those seeking access to others’ innovation investments for free. To take this bait will inevitably drive innovators away and leave standards to the moribund contributions of those who don’t, or can’t, innovate.

Some see OSS as the next opportunity to devalue SEPs after successfully pushing through the controversial amendments to the Institute of Electrical and Electronics Engineers’ (IEEE) patent policy.[20] A major change, and one vehemently protested by SEP holders, was the prohibition of SEP holders from seeking an injunction against infringers until after first-level appellate review has been concluded, significantly tilting the balance between innovators and implementers, and emboldening implementers to infringe the patents of innovators rather than taking licenses.[21] While the amendment’s long-term effects remain to be seen, there is widespread concern that cheapening the value of SEPs will result in less investment in and development of effective standards.

Likewise, amending SDO policies to require the use of FRAND-incompatible OSS licenses could also result in less innovative standards and a diminished industry role for the implementing SDO. When SDOs are considering specific software submissions for inclusion in standards, it is natural that software associated with highly innovative features will include proprietary licenses. Furthermore, many of these software submissions will be adjunct to highly innovative hardware, circuitry or algorithms. Insistence on a FRAND-incompatible license will prevent the adoption of both highly innovative software and its associated technology, sending a message that the SDO is willing to prioritize “free” over innovation.

The answer to the false choice between OSS and FRAND in standard development is simple: continue to allow, as has historically been the practice, contributors of OSS to make their contributions under permissive open source licenses. To those seeking to create an innovation-hostile climate in SDO operations by forcing software under copyleft licenses or licenses with royalty-free patent grants, just say no.

When Open Means Closed

The recent push to weaken innovation incentives in standards development by changing the approach to accepting OSS code comes cloaked in pleasantries like “open” and “free”. Policymakers, however, should remain wary. Terms like “open standards”, “free” and “sharing” evoke egalitarian ideals that belie a more complicated truth. The current system has been remarkably successful in balancing the needs of OSS users and developers with the interests of SEP holders through appropriate permissive licenses.

In contrast, moving to incompatible licenses for FRAND standards submissions weakens innovation incentives and discourages innovators from participating in standardization efforts. For standards development organizations, this means abdicating technical leadership to those who prioritize commodity implementations above innovative standards.[22]

Before SDOs change their historically successful policies on the treatment of OSS in standards, they should consider the evidence of whether their policies are actually broken. The kid with the $100 smartphone playing Pokémon Go would say, “probably not.”

[1] In contrast, the Institute of Electrical and Electronics Engineers, an SDO which previously developed one of the most successful standards of all time (802.11 or WiFi) under policies very similar to those of ETSI and ITU, made drastic policy changes in 2015, systematically ignoring the concerns of patent holders. Its policy will likely impact the willingness of innovators to contribute leading-edge technology to IEEE standards, versus standards development efforts of other standards bodies that have maintained a balance to encourage contribution of the best available technology as well as affordable license rates for implementers.

[8] Jay P. Kesan, The Fallacy of OSS Discrimination by FRAND Licensing: An Empirical Analysis, 29 (Illinois Public Law and Legal Theory Research Papers Series No. 10-14, 2011).

[9] Benoit Muller, Annex V: Views and Trends with Respect to Standards and IPRs, Study on the Interplay between Standards and Intellectual Property Rights (IPRs), Tender No ENTR/09/015, Final Report, April 2011, available at: http://www.iplytics.com/download/docs/studies/ipr_study_final_report_en.pdf.

[11] Making matters more complicated, OSS contributions can implicate non-SEPs such as patents that may be used (but are not mandatory) to facilitate implementation of a particular standard. Given this likelihood, the contribution of OSS under restrictive copyleft licenses can compel innovators to relinquish innovations beyond those essential to the standard, forcefully discouraging contributions at all levels for fear of having valuable investments reduced to giveaways.

[22] In addition, impairing the FRAND paradigm would shift the economics of innovation away from a patent disclosure-based regime to favor a trade-secret based regime. A trade secret regime with its barriers to sharing can cause secrecy-shrouded exclusivity in perpetuity and tragically inefficient allocation of resources, a giant step backwards for innovation and the downfall of standards development, which relies so heavily on disclosure, transparency and sharing.

If you are looking for clarity on what qualifies as a “Covered Business Method” for review under Section 18 of the America Invents Act (AIA), the decisions of the Patent Trial and Appeal Board (PTAB) offer little guidance. Congress established the PTAB to provide a more effective, efficient, and consistent review of issued patents. But the PTAB has a ways to go as far as consistency is concerned. Case in point: the PTAB’s denial of CBM review in four related CBMs styled Par Pharmaceutical, Inc. v. Jazz Pharmaceuticals, Inc.[1]

Section 18 of the AIA governs the transitional program for “covered business method patent” reviews. Section 18(a)(1)(E) states that a transitional proceeding may be instituted only for a “covered business method patent,” which is “[1] a patent that claims a method or corresponding apparatus for performing data processing or other operations used in the practice, administration, or management of a financial product or service, except that [2] the term does not include patents for technological inventions.”

Par, the first-ever pharmaceutical-related CBM challenge, joins the short list of challenges rejected under the first prong of section 18(a)(1)(E), the [1] “financial product” test. The PTAB panel denied institution of four related CBM challenges. All of the patents relate to distributing a prescription drug by checking, controlling, shipping, and mailing the drug for a fee.[2] According to the panel, the claims “do not recite a product or service particular to or characteristic of financial institutions such as banks, insurance companies, and investment houses.”[3]

Previously, however, a different panel emphasized in the PTAB’s first decision to institute a trial of a CBM patent (“the SAP decision”) that “the legislative history explained that the definition of covered business method patents supported the notion that the definition be broadly interpreted and encompass patents claiming activities that are financial in nature, incidental to a financial activity or complementary to a financial activity.”[4] In the same decision, the panel added that “[t]he Office also . . . did not adopt the suggestion that the term financial product or service be limited to the products or services of the financial services industry as it ran contrary to the intent behind § 18(d)(1).”[5] In line with the legislative history, the panel refused to adopt a definition limiting financial products or services to a particular industry (the financial services industry), because that limitation was considered but not adopted during rulemaking. As such, a narrow construction would be contrary to the legislative history of Section 18:

We do not interpret the statute as requiring the literal recitation of the terms financial products or services. The term financial is an adjective that simply means relating to monetary matters. This definition is consistent with the legislative history for Section 18, which explains that the definition was intended to encompass patents claiming activities incidental and complementary to a financial activity. We hold that [the] patent claims methods and products for determining a price and that these claims, which are complementary to a financial activity and relate to monetary matters, are considered financial products and services under § 18(d)(1).[6]

In contrast, the panel in Par rejected CBM petitions because the Petitioner did “not analyze the claim language, in detail and in context, to explain how the claim language recites method steps involving the movement of money or extension of credit in exchange for a product or service . . . .”[7] This requirement, however, is not found in Section 18, its legislative history, or the Office’s related rulemaking history. It also contradicts earlier decisions interpreting Section 18, as reflected by the SAP decision.

This isn’t the first time the PTAB has denied CBM review for failure to meet the statutory standards, but it is notable because the patents in Par are classified in Class 705, the so-called “sweet spot” of CBM patents.[8] In the handful of decisions that have denied institution because the challenged patent is not a CBM, the respective panel denied the petition because the claims failed the “financial product” test,[9] or failed the “technological invention” test.[10] Until Par, however, no panel had denied a petition against a patent classified in Class 705.

The PTAB continues to develop a body of decisions interpreting Section 18. Because its decisions are made when deciding institution, the PTAB is the sole interpreter of what qualifies for CBM review. If the Federal Circuit holds that such decisions are entirely unappealable, the risk of inconsistent judgments remains. In the meantime, practitioners must be mindful of the different applications and potential different interpretations of Section 18 as they craft CBM petitions and prepare responses to petitions.

Guest Post: Confidence in Intervals and Diffidence in the Courts (May 8, 2012)

This guest post comes to the STLR Blog from CLS Lecturer-in-Law Nathan A. Schachtman. He blogs regularly at http://schachtmanlaw.com/blog/. This post was originally published at that site.

Next year, the Supreme Court’s Daubert decision will turn 20. The decision, in interpreting Federal Rule of Evidence 702, dramatically changed the landscape of expert witness testimony. Still, there are many who would turn the clock back and disable the gatekeeping function. In past posts, I have identified scholars, such as Erica Beecher-Monas and the late Margaret Berger, who tried to eviscerate judicial gatekeeping. Recently a student note argued for the complete abandonment of all judicial control of expert witness testimony. See Note, “Admitting Doubt: A New Standard for Scientific Evidence,” 123 Harv. L. Rev. 2021 (2010) (arguing that courts should admit all relevant evidence).

One advantage that comes from requiring trial courts to serve as gatekeepers is that the expert witnesses’ reasoning is approved or disapproved in an open, transparent, and rational way. Trial courts subject themselves to public scrutiny in a way that jury decision making does not permit. The critics of Daubert often engage in a cynical attempt to remove all controls over expert witnesses in order to empower juries to act on their populist passions and prejudices. When courts misinterpret statistical and scientific evidence, there is some hope of changing subsequent decisions by pointing out their errors. Jury errors, on the other hand, unless they involve determinations of issues for which there was “no evidence,” are immune to institutional criticism or correction.

Despite my whining, not all courts butcher statistical concepts. There are many astute judges out there who see error and call it error. Take for instance, the trial judge who was confronted with this typical argument:

“While Giles admits that a p-value of .15 is three times higher than what scientists generally consider statistically significant—that is, a p-value of .05 or lower—she maintains that this ‘‘represents 85% certainty, which meets any conceivable concept of preponderance of the evidence.’’ (Doc. 103 at 16).”

Giles v. Wyeth, Inc., 500 F.Supp. 2d 1048, 1056-57 (S.D.Ill. 2007), aff’d, 556 F.3d 596 (7th Cir. 2009). Despite having case law cited to it (such as In re Ephedra), the trial court looked to the Reference Manual on Scientific Evidence, a resource that seems to be ignored by many federal judges, and rejected the bogus argument. Unfortunately, the lawyers who made the bogus argument still are licensed, and at large, to incite the same error in other cases.
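The error the Giles court rejected, treating one minus the p-value as the probability that the claimed effect is real, can be illustrated with a short simulation. This is a sketch of the general statistical point only; the sample size, number of trials, and random seed are arbitrary choices, not anything drawn from the case record:

```python
# Illustration of the transpositional fallacy: even when the null hypothesis
# is TRUE (no effect at all), a p-value at or below 0.15 still turns up about
# 15% of the time. The p-value is the probability of data this extreme given
# no effect; it is not "85% certainty" that an effect exists.
import math
import random

def two_sided_p(sample, sigma=1.0):
    """Two-sided p-value for H0: mean = 0, with known sigma (z-test)."""
    n = len(sample)
    z = (sum(sample) / n) * math.sqrt(n) / sigma
    # Standard normal survival probability via the error function.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(1)
trials, n, hits = 10_000, 20, 0
for _ in range(trials):
    sample = [random.gauss(0, 1) for _ in range(n)]  # null is true here
    if two_sided_p(sample) <= 0.15:
        hits += 1

print(f"P(p <= 0.15 | no effect) ~ {hits / trials:.3f}")  # close to 0.15
```

If a p-value of .15 really meant “85% certainty,” an analyst could become 85% certain of a nonexistent effect in roughly one of every seven tries.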

This business perhaps would be amenable to an empirical analysis. An enterprising sociologist of the law could conduct some survey research on the science and math training of the federal judiciary, on whether federal judges have read chapters of the Reference Manual before deciding cases involving statistics or science, and whether federal judges have expressed the need for further education. This survey evidence could be capped by an analysis of the prevalence of certain kinds of basic errors, such as the transpositional fallacy committed by so many judges (but decisively rejected in the Giles case). Perhaps such an empirical analysis would advance our understanding of whether we need specialty science courts.

One of the reasons that the Reference Manual on Scientific Evidence is worthy of so much critical attention is that the volume has the imprimatur of the Federal Judicial Center, and now the National Academies. Putting aside the idiosyncratic chapter by the late Professor Berger, the Manual clearly presents guidance on many important issues. To be sure, there are gaps, inconsistencies, and mistakes, but the statistics chapter should be a must-read for federal (and state) judges.

Unfortunately, the Manual has competition from lesser authors whose work obscures, misleads, and confuses important issues. Consider an article by two would-be expert witnesses, who testify for plaintiffs, and confidently misstate the meaning of a confidence interval:

“Thus, a RR [relative risk] of 1.8 with a confidence interval of 1.3 to 2.9 could very likely represent a true RR of greater than 2.0, and as high as 2.9 in 95 out of 100 repeated trials.”

Richard W. Clapp & David Ozonoff, “Environment and Health: Vital Intersection or Contested Territory?” 30 Am. J. L. & Med. 189, 210 (2004). This misstatement was then cited and quoted with obvious approval by Professor Beecher-Monas, in her text on scientific evidence. Erica Beecher-Monas, Evaluating Scientific Evidence: An Interdisciplinary Framework for Intellectual Due Process 60-61 n. 17 (2007). Beecher-Monas goes on, however, to argue that confidence interval coefficients are not the same as burdens of proof, but then implies that scientific standards of proof are different from the legal preponderance of the evidence. She provides no citation or support for the higher burden of scientific proof:

“Some commentators have attributed the causation conundrum in the courts to the differing burdens of proof in science and law.28 In law, the civil standard of ‘more probable than not’ is often characterized as a probability greater than 50 percent.29 In science, on the other hand, the most widely used standard is a 95 percent confidence interval (corresponding to a 5 percent level of significance, or p-level).30 Both sound like probabilistic assessment. As a result, the argument goes, civil judges should not exclude scientific testimony that fails scientific validity standards because the civil legal standards are much lower. The transliteration of the ‘more probable than not’ standard of civil factfinding into a quantitative threshold of statistical evidence is misconceived. The legal and scientific standards are fundamentally different. They have different goals and different measures. Therefore, one cannot justifiably argue that evidence failing to meet the scientific standards nonetheless should be admissible because the scientific standards are too high for preponderance determinations.”

Id. at 65. This seems to be on the right track, although Beecher-Monas does not state clearly whether she subscribes to the notion that the burdens of proof in science and law differ. The argument then takes a wrong turn:

“Equating confidence intervals with burdens of persuasion is simply incoherent. The goal of the scientific standard – the 95 percent confidence interval – is to avoid claiming an effect when there is none (i.e., a false positive).31”

Id. at 66. But this is a crazy error; confidence intervals are not burdens of persuasion, legal or scientific. Beecher-Monas is not, however, content to leave this alone:

“Scientists using a 95 percent confidence interval are making a prediction about the results being due to something other than chance.”

Id. at 66 (emphasis added). Other than chance? Well this implies causality, as well as bias and confounding, but the confidence interval, like the p-value, addresses only random or sampling error. Beecher-Monas’s error is neither random nor scientific. Indeed, she perpetuates the same error committed by the Fifth Circuit in a frequently cited Bendectin case, which interpreted the confidence interval as resolving questions of the role of matters “other than chance,” such as bias and confounding. Brock v. Merrill Dow Pharmaceuticals, Inc., 874 F.2d 307, 311-12 (5th Cir. 1989)(“Fortunately, we do not have to resolve any of the above questions [as to bias and confounding], since the studies presented to us incorporate the possibility of these factors by the use of a confidence interval.”)(emphasis in original). See, e.g., David H. Kaye, David E. Bernstein, and Jennifer L. Mnookin, The New Wigmore – A Treatise on Evidence: Expert Evidence § 12.6.4, at 546 (2d ed. 2011); Michael O. Finkelstein, Basic Concepts of Probability and Statistics in the Law 86-87 (2009)(criticizing the overinterpretation of confidence intervals by the Brock court).

Clapp, Ozonoff, and Beecher-Monas are not alone in offering bad advice to judges who must help resolve statistical issues. Déirdre Dwyer, a prominent scholar of expert evidence in the United Kingdom, manages to bundle up the transpositional fallacy and a misstatement of the meaning of the confidence interval into one succinct exposition:

“By convention, scientists require a 95 per cent probability that a finding is not due to chance alone. The risk ratio (e.g. ‘2.2’) represents a mean figure. The actual risk has a 95 per cent probability of lying somewhere between upper and lower limits (e.g. 2.2 ±0.3, which equals a risk somewhere between 1.9 and 2.5) (the ‘confidence interval’).”

Of course, Clapp, Ozonoff, Beecher-Monas, and Dwyer build upon a long tradition of academics’ giving errant advice to judges on this very issue. See, e.g., Christopher B. Mueller, “Daubert Asks the Right Questions: Now Appellate Courts Should Help Find the Right Answers,” 33 Seton Hall L. Rev. 987, 997 (2003)(describing the 95% confidence interval as “the range of outcomes that would be expected to occur by chance no more than five percent of the time”); Arthur H. Bryant & Alexander A. Reinert, “The Legal System’s Use of Epidemiology,” 87 Judicature 12, 19 (2003)(“The confidence interval is intended to provide a range of values within which, at a specified level of certainty, the magnitude of association lies.”) (incorrectly citing the first edition of Rothman & Greenland, Modern Epidemiology 190 (Philadelphia 1998); John M. Conley & David W. Peterson, “The Science of Gatekeeping: The Federal Judicial Center’s New Reference Manual on Scientific Evidence,” 74 N.C. L. Rev. 1183, 1212 n.172 (1996)(“a 95% confidence interval … means that we can be 95% certain that the true population average lies within that range”).

Who has prevailed? The statistically correct authors of the statistics chapter of the Reference Manual on Scientific Evidence, or the errant commentators? It would be good to have some empirical evidence to help evaluate the judiciary’s competence. Here are some cases, many drawn from the Manual’s discussions, arranged chronologically, before and after the first appearance of the Manual:

Before First Edition of the Reference Manual on Scientific Evidence:

DeLuca v. Merrell Dow Pharms., Inc., 911 F.2d 941, 948 (3d Cir. 1990)(“A 95% confidence interval is constructed with enough width so that one can be confident that it is only 5% likely that the relative risk attained would have occurred if the true parameter, i.e., the actual unknown relationship between the two studied variables, were outside the confidence interval. If a 95% confidence interval thus contains ‘1’, or the null hypothesis, then a researcher cannot say that the results are ‘statistically significant’, that is, that the null hypothesis has been disproved at a .05 level of significance.”)(internal citations omitted)(citing in part, D. Barnes & J. Conley, Statistical Evidence in Litigation § 3.15, at 107 (1986), as defining a CI as “a limit above or below or a range around the sample mean, beyond which the true population is unlikely to fall”).

United States ex rel. Free v. Peters, 806 F. Supp. 705, 713 n.6 (N.D. Ill. 1992) (“A 99% confidence interval, for instance, is an indication that if we repeated our measurement 100 times under identical conditions, 99 times out of 100 the point estimate derived from the repeated experimentation will fall within the initial interval estimate … .”), rev’d in part, 12 F.3d 700 (7th Cir. 1993)

After First Edition of the Reference Manual on Scientific Evidence:

SmithKline Beecham Corp. v. Apotex Corp., 247 F.Supp.2d 1011, 1037-38 (N.D. Ill. 2003)(“the probability that the true value was between 3 percent and 7 percent, that is, within two standard deviations of the mean estimate, would be 95 percent”)(also confusing attained significance probability with posterior probability: “This need not be a fatal concession, since 95 percent (i.e., a 5 percent probability that the sign of the coefficient being tested would be observed in the test even if the true value of the sign was zero) is an arbitrary measure of statistical significance. This is especially so when the burden of persuasion on an issue is the undemanding ‘preponderance’ standard, which requires a confidence of only a mite over 50 percent. So recomputing Niemczyk’s estimates as significant only at the 80 or 85 percent level need not be thought to invalidate his findings.”), aff’d on other grounds, 403 F.3d 1331 (Fed. Cir. 2005)

In re Silicone Gel Breast Implants Prods. Liab. Litig, 318 F.Supp.2d 879, 897 (C.D. Cal. 2004) (interpreting a relative risk of 1.99, in a subgroup of women who had had polyurethane foam covered breast implants, with a 95% CI that ran from 0.5 to 8.0, to mean that “95 out of 100 a study of that type would yield a relative risk somewhere between on 0.5 and 8.0. This huge margin of error associated with the PUF-specific data (ranging from a potential finding that implants make a woman 50% less likely to develop breast cancer to a potential finding that they make her 800% more likely to develop breast cancer) render those findings meaningless for purposes of proving or disproving general causation in a court of law.”)(emphasis in original)

Eli Lilly & Co. v. Teva Pharms, USA, 2008 WL 2410420, *24 (S.D.Ind. 2008)(stating incorrectly that “95% percent of the time, the true mean value will be contained within the lower and upper limits of the confidence interval range”)

Benavidez v. City of Irving, 638 F.Supp. 2d 709, 720 (N.D. Tex. 2009)(interpreting a 90% CI to mean that “there is a 90% chance that the range surrounding the point estimate contains the truly accurate value.”)

Estate of George v. Vermont League of Cities and Towns, 993 A.2d 367, 378 n.12 (Vt. 2010)(erroneously describing a confidence interval to be a “range of values within which the results of a study sample would be likely to fall if the study were repeated numerous times”)

Correct Statements

There is no reason for any of these courts to have struggled so with the concept of statistical significance or of the confidence interval. These concepts are well elucidated in the Reference Manual on Scientific Evidence (RMSE):

“To begin with, ‘confidence’ is a term of art. The confidence level indicates the percentage of the time that intervals from repeated samples would cover the true value. The confidence level does not express the chance that repeated estimates would fall into the confidence interval.91

* * *

According to the frequentist theory of statistics, probability statements cannot be made about population characteristics: Probability statements apply to the behavior of samples. That is why the different term ‘confidence’ is used.”

RMSE 3d at 247 (2011).
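The Manual’s point that “confidence” is a term of art lends itself to a quick demonstration. The sketch below uses illustrative numbers chosen arbitrarily (a true mean of 2.2, a known standard deviation, and a z-based interval); it shows that the 95% figure describes the long-run coverage of the interval-generating procedure over repeated samples, not the probability that any single realized interval contains the truth:

```python
# Repeated-sampling demonstration: about 95% of the intervals produced by
# the procedure cover the true value. Any one realized interval either
# contains the true mean or it does not; "95%" belongs to the procedure.
import math
import random

TRUE_MEAN, SIGMA, N = 2.2, 1.0, 50  # illustrative values, not from any study

def ci_95(sample):
    """95% CI for the mean with known sigma: xbar +/- 1.96 * sigma/sqrt(n)."""
    xbar = sum(sample) / len(sample)
    half = 1.96 * SIGMA / math.sqrt(len(sample))
    return xbar - half, xbar + half

random.seed(7)
trials, covered = 10_000, 0
for _ in range(trials):
    sample = [random.gauss(TRUE_MEAN, SIGMA) for _ in range(N)]
    low, high = ci_95(sample)
    covered += low <= TRUE_MEAN <= high

print(f"coverage over {trials} repeated samples: {covered / trials:.3f}")  # ~0.95
```

This is exactly the operational definition the Manual gives: the confidence level is the percentage of the time that intervals from repeated samples would cover the true value.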

Even before the Manual, many capable authors have tried to reach the judiciary to help them learn and apply statistical concepts more confidently. Professors Michael Finkelstein and Bruce Levin, of Columbia University’s Law School and Mailman School of Public Health, respectively, have worked hard to educate lawyers and judges in the important concepts of statistical analyses:

“It is the confidence limits PL and PU that are random variables based on the sample data. Thus, a confidence interval (PL, PU ) is a random interval, which may or may not contain the population parameter P. The term ‘confidence’ derives from the fundamental property that, whatever the true value of P, the 95% confidence interval will contain P within its limits 95% of the time, or with 95% probability. This statement is made only with reference to the general property of confidence intervals and not to a probabilistic evaluation of its truth in any particular instance with realized values of PL and PU. “

Courts have no doubt been confused to some extent between the operational definition of a confidence interval and the role of the sample point estimate as an estimator of the population parameter. In some instances, the sample statistic may be the best estimate of the population parameter, but that estimate may be rather crummy because of the sampling error involved. See, e.g., Kenneth J. Rothman, Sander Greenland, Timothy L. Lash, Modern Epidemiology 158 (3d ed. 2008) (“Although a single confidence interval can be much more informative than a single P-value, it is subject to the misinterpretation that values inside the interval are equally compatible with the data, and all values outside it are equally incompatible. * * * A given confidence interval is only one of an infinite number of ranges nested within one another. Points nearer the center of these ranges are more compatible with the data than points farther away from the center.”); Nicholas P. Jewell, Statistics for Epidemiology 23 (2004)(“A popular interpretation of a confidence interval is that it provides values for the unknown population proportion that are ‘compatible’ with the observed data. But we must be careful not to fall into the trap of assuming that each value in the interval is equally compatible.”); Charles Poole, “Confidence Intervals Exclude Nothing,” 77 Am. J. Pub. Health 492, 493 (1987)(“It would be more useful to the thoughtful reader to acknowledge the great differences that exist among the p-values corresponding to the parameter values that lie within a confidence interval … .”).

Admittedly, I have given an impressionistic account, and I have used anecdotal methods, to explore the question whether the courts have improved in their statistical assessments in the 20 years since the Supreme Court decided Daubert. Many decisions go unreported, and perhaps many errors are cut off from the bench in the course of testimony or argument. I personally doubt that judges exercise greater care in their comments from the bench than they do in published opinions. Still, the quality of care exercised by the courts would be a worthy area of investigation by the Federal Judicial Center, or perhaps by other sociologists of the law.