
Bugs in the System

A Primer on the Software Vulnerability Ecosystem and its Policy Implications

Policy Paper

July 28, 2016

In recent years, a seemingly endless string of massive data breaches in both the private and public sectors has been front-page news. Whether the target is a company like Sony or a government agency like the Office of Personnel Management (OPM), such breaches are very often made possible by a software vulnerability – a “bug” in the system – that was unknown to, or left unaddressed by, the target or its software vendor.

The existence of such vulnerabilities—or “vulns” for short—is unavoidable. Software is complex and humans are fallible, so vulnerabilities are bound to occur. Much of cybersecurity can be reduced to a constant race between the software developers and security experts trying to discover and patch vulnerabilities, and the attackers seeking to uncover and exploit those same flaws. The question for policymakers is what they can do to help speed the discovery and patching of vulnerabilities so that our computer systems—and therefore our economic stability, our national security, and consumers’ privacy—are safer. This paper is intended as a primer on the vulnerability ecosystem for policymakers and advocates seeking to answer that question, describing what vulns are, who discovers them, who buys them, how and when they do (or don’t) get patched, and why.

A wide range of actors seek to discover security flaws in software, whether to fix them, exploit them, or sell them to someone else who will fix or exploit them. These bug-hunters range from independent researchers, to small academic teams or security firms, to large tech companies working to improve their products, to governments—including our own, and other much less rights-respecting states—seeking to use these flaws in law enforcement or intelligence investigations. After finding a vulnerability, the discoverer has three basic options: not disclosing the vulnerability to the public or the software vendor at all; fully disclosing the vuln to the public, which in some cases may be the best way to get it patched but in others may leave users of the software dangerously exposed; or partially disclosing it only to the vendor (so-called “responsible” disclosure), so that the vendor can fix the bug before it becomes public. Partial disclosure is often preferred because it can sometimes take months for a vendor to fix its product, and even longer for all the affected users to update their software to patch the security hole.

Complicating the question of disclosure is a range of laws—such as the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act, and the Electronic Communications Privacy Act—whose broad and vague terms arguably criminalize, or create civil liability for, actions that researchers routinely take in the course of legitimate security research. Unless these laws are reformed, they will continue to chill researchers’ disclosure of critical vulnerabilities, for fear of being sued or even thrown in jail.

Another disincentive to disclosing vulnerabilities so that they can be patched is the existence of open markets for vulnerabilities, where researchers can often get top dollar from criminal networks or governments seeking to exploit those flaws, or from intermediary agents who buy from researchers and resell to criminals and states. Companies have responded by creating innovative vulnerability reward programs (VRPs), including “bug bounty” programs that pay researchers for the bugs they submit. Some of these programs also seek to reduce the legal chill on researchers by promising not to sue those who submit through them. These programs sometimes struggle to compete with the far more lucrative open market, but they give researchers who want to help improve cybersecurity—and perhaps earn a little cash or recognition for their discovery—a legitimate avenue to pursue.

Researchers often have a range of incentives to disclose their discoveries to someone, whether to the vendor, the public, or a buyer on the market. Governments, on the other hand, often have an incentive to withhold the vulns they buy or discover. Although they may want to keep the public and their own systems safe from malicious actors exploiting those vulnerabilities, they also want to make use of them for a variety of purposes, from law enforcement to foreign intelligence surveillance—and the longer a vulnerability remains secret and unpatched, the longer it remains useful. Governments therefore have to weigh the security value of disclosure against the benefit of stockpiling and using vulnerabilities for their own purposes.

In conclusion, we offer five initial policy recommendations to ensure that more vulnerabilities are discovered and patched sooner:

1. The U.S. government should minimize its participation in the vulnerability market, since it is the largest buyer in a market that discourages researchers from disclosing vulns to be patched.

2. The U.S. government should establish strong, clear procedures for government disclosure of the vulnerabilities it buys or discovers, with a heavy presumption toward disclosure.

3. Congress should establish clear rules of the road for government hacking in order to better protect cybersecurity and civil liberties.

4. Government and industry should support bug bounty programs as an alternative to the vulnerability market, and should investigate other innovative ways to foster the disclosure and prompt patching of vulnerabilities.

5. Congress should reform computer crime and copyright laws, and agencies should modify their application of those laws, to reduce the legal chill on legitimate security research.