Good news for security researchers: The Library of Congress announced that it would renew and even expand protections for security testing under Section 1201 of the Digital Millennium Copyright Act (DMCA). Although we believe the security testing exemption could still use improvement, we applaud the Library of Congress’ continuing commitment to protecting good faith security research.

Background

Sec. 1201 of the DMCA prohibits circumventing technological protection measures (TPMs, like encryption, authentication requirements, region coding, etc.) to access copyrighted works, including software, without permission of the copyright holder. This creates criminal penalties and civil liability for independent security researchers who do not obtain the software copyright holder's authorization for each TPM circumvention.

[For additional background on what DMCA Sec. 1201 is and why it’s important for security research, please see this earlier post.]

The Library of Congress first granted a three-year security testing exemption to DMCA Sec. 1201 in 2015. That temporary exemption granted more protection to good faith security research than the DMCA itself, and it came up for renewal in 2018. Because the Copyright Office had telegraphed that it would renew the 2015 security testing exemption - applying a welcome new presumption of renewal for previously approved exemptions - the renewal itself was expected.

Beyond renewal, several groups - including Rapid7 and our colleagues - recommended expanding the security testing exemption to provide good faith researchers with greater legal protection. Several other groups opposed expansion. The question of whether and how to expand the security testing exemption is what the Library of Congress considered over the last several months.

Expansion and rule

The Library of Congress chose to expand on its 2015 security testing exemption for DMCA Sec. 1201 in two ways:

The Librarian of Congress is removing the device limitation, so now software on more classes of computers can be tested - as long as the researcher owns the computer or has the authorization of the computer owner.

Previously the research exemption was limited to
"a) A device or machine primarily designed for use by individual consumers (including voting machines);
b) A motorized land vehicle; or
c) A medical device designed for whole or partial implantation in patients or a corresponding personal monitoring system, that is not and will not be used by patients or for patient care."

This expansion means researchers can circumvent technological protection measures on software for cybersecurity testing on devices that previously did not fit into those categories. However, before you go hacking your drone over a crowd in restricted airspace, check out the requirements in the rest of the rule (copied in full below) and remember that the exemption only applies to DMCA and not other laws.

The Librarian is removing "controlled" from the "controlled environment" limitation, so that it's just an environment designed to avoid harm.

Previously, the research exemption required
"where such activity is carried out in a controlled environment designed to avoid any harm to individuals or the public,"

As the Librarian explained, this change is intended to eliminate ambiguity about what qualifies as “controlled” and to better enable researchers to perform tests in “live” environments. But the environment must still be designed to avoid harm and, as before, the testing must satisfy the other requirements in the rule. Killing a car on a highway is still a no-go. Heh.

Accordingly, the Acting Register recommends that the Librarian designate the following class:

(i) Computer programs, where the circumvention is undertaken on a lawfully acquired device or machine on which the computer program operates, or is undertaken on a computer, computer system, or computer network on which the computer program operates with the authorization of the owner or operator of such computer, computer system, or computer network, solely for the purpose of good-faith security research and does not violate any applicable law, including without limitation the Computer Fraud and Abuse Act of 1986.

(ii) For purposes of this paragraph (b)(11), “good-faith security research” means accessing a computer program solely for purposes of good-faith testing, investigation, and/or correction of a security flaw or vulnerability, where such activity is carried out in an environment designed to avoid any harm to individuals or the public, and where the information derived from the activity is used primarily to promote the security or safety of the class of devices or machines on which the computer program operates, or those who use such devices or machines, and is not used or maintained in a manner that facilitates copyright infringement.

Good, but room for improvement

Other recommended reforms did not make the cut. In particular, the Library declined to scale back the requirement that all other laws and regulations be obeyed in order to qualify for the research exemption - "and does not violate any applicable law, including without limitation the Computer Fraud and Abuse Act of 1986." Rapid7 and other proponents pointed out that this "other laws" restriction creates legal uncertainty for researchers by making the exemption contingent on compliance with even obscure regulations - county electrical codes, for example - that have little to do with digital security or copyright.

Although that recommendation received support from DOJ and NTIA, in addition to the other exemption proponents, the Library of Congress declined to make this change. The Register of Copyrights noted that the DMCA itself [at 17 USC 1201(j)(2)] includes this "other laws" limitation. The Register also noted there wasn't enough of a record concretely demonstrating that security research was chilled because Sec. 1201 includes the "other laws" requirement - the question being not whether other broad laws chill research, but whether the added layer of potential Sec. 1201 liability on top of those laws chills research. That will be a high bar to meet in the next rulemaking in 2021.

Overall, however, this is a good outcome for security researchers. The Library of Congress once again demonstrated its understanding of the importance of independent cybersecurity research to society, and that security testing does not infringe on the exercise of copyright. The next opportunity for changing the temporary security testing exemption is in 2021, but in the meantime we join the Register of Copyrights in calling on Congress to consider legislative reforms to DMCA Sec. 1201 to protect good faith security researchers on a permanent basis.

Happy National Cybersecurity Awareness Month, everyone! This festive blog post was drafted entirely in a pumpkin patch. I am lost here and it is cold.

One of the major contributors to increasing and improving cybersecurity awareness is research that identifies vulnerabilities in technology and discloses them to the technology manufacturers and users so they can understand and mitigate the risk. This process is called coordinated vulnerability disclosure and handling (or "CVD processes" for short), and is something Rapid7 has commented on many-a-time. If you're unfamiliar with CVD processes and why they're important for both organizational security and researchers, please see this previous post.

In this post, we aim to distinguish between three broad flavors of CVD processes based on authorization, incentives, and resources required. We also urge wider adoption of foundational processes before moving to more advanced and resource-intensive processes. Here are three general categories of CVD processes:

1. Unsolicited: The organization's CVD process includes a channel for receiving unsolicited vulnerability disclosures and resources to respond to the disclosures, but the organization does not authorize or incentivize researchers to look for security vulnerabilities;

2. Authorized: The organization's CVD process does authorize researchers to look for security vulnerabilities, but does not offer rewards to researchers; and

3. Incentivized: The organization's CVD process authorizes researchers to look for security vulnerabilities and also offers rewards (such as bug bounties) for finding and disclosing them.

What prompts this post?

First, I am waiting to be rescued from this nightmarish labyrinth of pumpkin vines and corn stalks before hypothermia sets in. Second, CVD processes keep coming up in policy discussions without recognition or awareness of the basics, which may be an impediment to wider adoption of CVD processes.

Policymakers are pushing agencies to run before they walk when it comes to CVD processes. Following the success of "Hack the Pentagon," legislators now want to "Hack DHS" and "Hack the State Dept."—requiring, by law, that particular government agencies invite researchers to find vulnerabilities in their systems. While this might ultimately be a useful exercise, it also risks rushing individual agencies to more complex and resource-intensive processes while failing to ensure the fundamentals are in place for all agencies. The mighty Katie Moussouris has spoken out repeatedly on this very issue, and her work on the levels of maturity for CVD processes has heavily influenced our views.

Two recent government reports on CVD demonstrate strong understanding of the issues. First, the US Dept. of Justice issued its helpful Framework for a Vulnerability Disclosure Program for Online Systems, a detailed resource for organizations establishing a CVD process that authorizes research into certain systems. Second, the US House Energy & Commerce Committee issued The Criticality of Coordinated Disclosure In Modern Cybersecurity - a great report that explicitly notes the crucial distinction between authorized disclosures and incentivized disclosures (i.e., bug bounties). However, neither report delves into the foundational CVD processes and resource requirement issues highlighted in this post.

Foundational CVD process: Communication and assessment

At its most foundational, an organization's CVD process can be 1) a public point of contact for vulnerability disclosures, like security@organization.mail, and 2) a process and resources for reviewing and responding to disclosures, including mitigation and communicating with external stakeholders such as the vulnerability reporter. There is no explicit authorization for researchers to probe the organization, and therefore no guarantee of legal liability protection for researchers that discover vulnerabilities. This is how Rapid7's own CVD process is structured, and we believe this type of process should be a regular feature of organizations’ cybersecurity programs.
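
One lightweight way to publish such a contact point - offered purely as an illustration, assuming an organization chooses to adopt the draft "security.txt" convention (a community proposal, not something this post's sources prescribe; the domain and addresses below are placeholders) - is a short plain-text file served at a well-known URL on the organization's website:

```text
# Published at https://example.org/.well-known/security.txt
# example.org, the mailbox, and the URLs are placeholders
Contact: mailto:security@example.org
Encryption: https://example.org/pgp-key.txt
Policy: https://example.org/vulnerability-disclosure-policy
```

A file like this advertises the disclosure channel to good faith reporters without committing the organization to either authorization or rewards - it supports exactly the "Unsolicited" flavor of CVD process.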

The vast majority of organizations (even among large global companies) and government agencies do not have a public-facing means for external parties to report security vulnerabilities. So there is clearly a gap in the adoption of CVD processes into organizations' cybersecurity programs, even at this fundamental level. And organizations with limited security resources and expertise - which is most organizations - may simply not be prepared for a heavier volume of vulnerability disclosures.

This foundational CVD setup keeps an organization's overhead manageable while still providing a clear channel for vulnerability disclosures to the right internal staff. One benefit for security researchers, despite not having clear liability protection, is that using the channel demonstrates they are acting in good faith, and communicating with personnel tasked with handling security issues should help minimize misunderstandings, conflict, and ignored reports.

One might consider this flavor of CVD process to be "basic" since the lack of incentives likely results in fewer total disclosures, but we should not underestimate the resources required to safely receive, assess, mitigate, and communicate about unsolicited security vulnerability disclosures. If broad adoption of CVD processes is the goal, resource constraints must be considered.

Next level: Authorized research

A more advanced step for organizations is to overtly authorize researchers to look for vulnerabilities in specified assets. This is the model of CVD process that DHS would be required to establish agency-wide under legislation (H.R. 6735) that recently passed the US House of Representatives, and now awaits passage in the Senate. This is also the CVD flavor thoroughly described in the Dept. of Justice’s report. The big benefit of authorization for researchers is clearer legal protection from anti-hacking laws like the Computer Fraud and Abuse Act, so long as the researcher stays within the bounds of the authorization.

This CVD approach also requires more resources from the organization to establish the program and deal with disclosures. Authorizing research likely boosts the number of researchers probing the organization's assets, thereby boosting the number of vulnerability disclosures to the organization and requiring more resources to assess and address them. In addition, as the Dept. of Justice's CVD report details, organizations must carefully identify which specific assets researchers are authorized to probe, specify what types of techniques (e.g., phishing, DDoS) are authorized, and clearly outline the responsibilities and expectations for researcher conduct.

And what about those assets that the organization does not identify as authorized for security research? The organization should still establish the more foundational model above and prepare to receive unsolicited vulnerability disclosures that apply to the organization’s other assets.

For organizations with sufficient familiarity and resources, this more intensive authorized CVD process may be a good fit and may uncover more hidden issues that the organization can resolve. For organizations that are not sufficiently prepared, the volume of disclosures and degree of planning required might be overwhelming. As Moussouris said, "If they can't handle known vulnerabilities, how are they going to fare when the focus of all these hackers is going to pile on them?"

Yes, the hackers will close in like rural darkness in late October.

Final boss: Incentivizing research

A third flavor goes beyond mere authorization and motivates research with some type of prize - hoodies, shout-outs in security bulletins, or cash money - for finding and disclosing vulnerabilities. "Bug bounties" fall under this category too. Some legislation - the "Hack DHS Act" - would require DHS to adopt a bug bounty on a pilot basis, though this legislation has not made as much progress as H.R. 6735, which uses the authorization model described above.

This model adds a reward system to the complexities and resource requirements of authorization. As you might anticipate, an even greater volume of disclosures is likely as researchers compete for those sweet hoodies, those fine shout-outs. The greater the volume of disclosures, the more resources required to evaluate and respond to them - and if an organization incentivizes research but fails to follow through on the rewards, it runs a greater risk of reputational harm than if it had never offered incentives in the first place.

Many incentives programs end after a designated period. As with the authorized CVD process above, organizations will still need a foundational CVD process in place to receive unsolicited disclosures about remaining vulnerabilities. Just as a bug bounty is not a replacement for a comprehensive organizational cybersecurity program, it does not replace the need for a foundational CVD process for systems and assets outside the bounty's scope.

Driving more adoption of fundamental processes

Rapid7 - and evidently the House Energy & Commerce Committee - believes a CVD process should be a standard component of security programs in companies and government agencies. A great place to start would be establishing foundational CVD processes in federal government agencies. Federal agencies must consider this anyway, as foundational CVD processes are now part of the NIST Cybersecurity Framework, and the White House has directed agencies to use the NIST Framework to manage their cyber risk. While more advanced CVD processes can certainly be useful for organizations prepared for the workload, widespread adoption of foundational processes should be a higher policy priority than spotty adoption of more advanced CVD processes.

And now I must go. Wispy light is quietly filtering through the trees and corn stalks of this lonely corner of pumpkin patch. Search party or apparition, I do not know, but I am compelled to unite with it. Happy Hallow-Cyber-Awareness-Ween...

What is the essential information that manufacturers should communicate to consumers about security updates for Internet of Things (IoT) devices?

A working group of private sector volunteers produced a document to address this question with voluntary, actionable recommendations for IoT device manufacturers. The group was convened by the National Telecommunications and Information Administration (part of the US Dept. of Commerce) to study issues surrounding IoT security update capability, though the document is the product of the working group and not a government agency.

The document, entitled "Communicating IoT Device Security Update Capability to Improve Transparency for Consumers," identifies three key elements and suggests three additional considerations as a template for providing consumers with critical information about update capability. The goal of the communication is to enable consumers to make more informed purchases based on IoT device security, thereby driving market forces to better reward more secure products.

A. Three key elements IoT device manufacturers should communicate to consumers before purchase are:

Describe whether the device can receive security updates.

Describe how the device receives updates.

Describe the anticipated timeline for the end of security update support.

B. Three additional considerations IoT device manufacturers may communicate to consumers before or after purchase are:

Describe how the user is notified about security updates.

Describe what happens when the device no longer receives security update support.

Describe how the manufacturer secures updates, or how the update process is reasonably secure.

Issue background - IoT security patches

The security of Internet of Things (IoT) devices is widely recognized as an increasingly important risk to manage as the devices proliferate, take on more computing power and connectivity, and become embedded in a wide variety of sensitive environments. Like virtually all computers and software, IoT devices inevitably carry security flaws, exposing the devices to the risk of breach or attack until those vulnerabilities are mitigated.

While there are many fundamental components of IoT security, such as secure design, providing security updates or patching is a key way to protect IoT devices when vulnerabilities are discovered after a device has entered the market. IoT devices that cannot be patched risk repeated exploitation of device vulnerabilities until the device is taken offline. Yet consumers often have little insight into whether a particular IoT device is capable of receiving security updates, and until what date the device is supported, making informed purchasing choices more difficult. The lack of transparency is especially acute for fungible products at lower price points.

The concept of an IoT security rating or "nutrition label" is routinely floated to enhance consumer transparency. For example, the excellent 2016 report from the US Commission on Enhancing National Cybersecurity recommended a voluntary rating and label scheme for security. Some federal legislation, such as Sen. Markey's Cyber Shield Act, proposes an Energy Star-like program to provide consumers with IoT security information. The EU Agency for Network and Information Security is also considering a labeling system for IoT devices certified as secure. Finally, multiple independent organizations like Consumers Union and UL are developing standards and ratings to help consumers evaluate product cybersecurity. As a critical component of IoT security, update capability and lifecycle have a role in each of these frameworks.

NTIA multistakeholder process

In 2016, the National Telecommunications and Information Administration (NTIA) launched an open process to discuss IoT security updates. As with past "multistakeholder" processes, this work was facilitated by NTIA, but led by participants that included numerous technical and policy experts from private industry and civil society.

The participants divided up into working groups focused on particular issues related to security updates, including transparency. (This blog post focuses on transparency, but you can read more about the good work of the other working groups, and the status of their respective documents, here.) The transparency working group included dozens of participants from diverse backgrounds, and was co-chaired by Aaron Kleiner of Microsoft, Beau Woods of the Atlantic Council, and myself.

The full body of stakeholders convened by NTIA ultimately reached consensus in favor of the working group's final document - Communicating IoT Device Security Update Capability to Improve Transparency for Consumers. In drafting the document, the working group considered a broad range of inputs, including technical standards and guidance on IoT security from government agencies, nonprofits, and companies.

It's worth noting that the Federal Trade Commission commented on the working group document. The FTC's comments function as a standalone statement from the Commissioners (rather than a staff report) while providing additional input and background information to the working group document. The FTC comments are available here.

Communicating IoT Security Update Capability

Communicating IoT Device Security Update Capability is, believe it or not, tightly scoped to communicating IoT security update capability, not other aspects of IoT security and privacy that are important in their own right. The document does not recommend exact language device manufacturers must use, require specific methods of communicating, or dictate means of providing an update. And finally, the document takes pains to make clear that it is voluntary and not intended to describe a basis for regulation or a standard of care. The recommendations to device manufacturers are divided into two categories: key elements to communicate before purchase, and additional information that can be communicated before or after purchase.

A. Key Elements - Manufacturers should communicate these elements to consumers before purchase.

Describe whether the device can receive security updates. This could be a simple yes/no statement, or a symbol.

Describe how the device receives updates. Are updates manual or automatic? What basic user actions are required for updates - an account, additional fees?

Describe the anticipated timeline for the end of security update support. This should state the minimum period for which consumers can expect security updates. A specific date is preferable. If the support timeline is unknown or indefinite, this should be indicated.

B. Additional Elements - Manufacturers should consider communicating these elements to consumers before or after purchase.

Describe how the user is notified about security updates. How will the user know that an update is needed? This could be combined with element A.2.

Describe what happens when the device no longer receives security update support. Does the device lose functionality? Is there extended subscription or third party support available? Or does the user simply continue operating at the user's own risk?

Describe how the manufacturer secures updates, or how the process is reasonably secure. Manufacturers could describe how they verify source or test functionality of updates, or the manufacturer could simply reference a standard or solution.
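
Pulling the six elements together, here is one hypothetical sketch of how a manufacturer might phrase them on packaging or a product page (the update method, dates, and URL below are invented for illustration; the document itself does not prescribe exact language):

```text
Security updates:        Yes - delivered automatically over Wi-Fi
Supported until:         at least January 1, 2021
Update notices:          in-app alert and email
After support ends:      device keeps working, but at your own risk
How updates are secured: signed and verified before installation
Details:                 www.example.com/product-security
```

The first three lines cover the key elements to communicate before purchase; the rest cover the additional considerations.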

Up for adoption

IoT security updates can be communicated in a variety of ways – such as a physical label on a box, or a product description on an online retailer's website – and the methods and capabilities to deliver updates can vary widely. Simplicity in communication should be a priority, to ensure understanding across a wide range of consumers. Even if a manufacturer declines to communicate all the items, just informing consumers before purchase whether a device can receive updates and for how long would be a welcome boost to transparency from the status quo.

We recognize that maintaining and communicating security update capability for IoT devices is not trivial, but requires expertise, resources, and persistence. We hope the Communicating IoT Device Security Update Capability template and other resources developed through the NTIA multistakeholder process will be broadly helpful to manufacturers seeking greater transparency for consumers and security for IoT devices.

Federal policymakers are ramping up consideration of privacy and breach notification legislation. This latest effort is driven by several converging events – the Equifax breach, concerns related to Cambridge Analytica's use of social media data, international requirements on security and privacy (such as GDPR), and the patchwork of state data security and breach notification laws.

A baseline requirement for commercial data security is frequently part of these discussions, but sometimes only as an afterthought. This issue deserves close attention to ensure any requirement is both effective at protecting users and flexible enough to be practicable.

This post sketches some of Rapid7's high level positions on commercial data security regulation. We would evaluate any regulatory proposal holistically and there are many possible approaches, so we are not committing to specific language here. We also recognize that federal commercial data security legislation is unlikely anytime soon, though states have been increasingly active in this area. Feedback on the principles below is welcome.

Support for comprehensive data security protection

As of June 2018, all 50 US states (as well as the District of Columbia) will have data breach notification laws. It is a testament to the enduring plague of breaches, and state legislatures' efforts to address it, that we have unlocked the dubious achievement of Patchwork Supreme. Increasingly, states are looking to directly require minimum security standards for personal information held by the private sector – at least 17 states (including, most recently, Alabama) now have such laws, and they can differ considerably by state and industry sector.

The fragmented landscape of state data security and breach notification laws does not serve consumers or businesses well. Consumers in different states are subject to uneven levels of protection, and the sheer complexity of the laws makes it more difficult for businesses of all sizes (which are now capable of being global businesses) to comply.

Rapid7 supports a unified and comprehensive data security standard for personal information that is clear and flexible enough for a wide variety of users and businesses to understand and implement. However, if current data security laws are preempted, a federal replacement should not establish substantially weaker protections than the status quo.

Separate from breach notification and privacy

While privacy and breach notification are both important issues, it is worth emphasizing that thoughtful security policy is distinct and irreplaceable. Data security requirements are often paired with privacy and breach notification legislation, both in states and at the federal level. However, it is also not uncommon for security to be absent or to receive less focus than breach notification or privacy requirements.

Too often, breach notification regulations seem to be relied on as a substitute for data security requirements: the theory is that because complying with breach notification requirements is expensive and difficult, organizations will be inspired to implement strong security safeguards to prevent breaches. (Consumer transparency also has more political support than a data security requirement.) Yet this calculus is too roundabout and only works to an extent, as demonstrated by the steady onward march of data breaches in spite of state breach notification laws and class action lawsuits. Notification requirements and common law causes of action (like negligence) apply only after a breach has occurred. Data security safeguards are critical to preventing breaches before they happen by addressing the root cause of many breaches - inadequate security.

Security is also widely accepted as a key component of a privacy protection framework. While privacy is not achieved through security alone, data security is critical to protect collected data against risks that arise from unauthorized system behavior - such as malicious hacking and accidental data exposure. Privacy without security is like entrusting your valuables to an unlocked vault.

Preserving flexibility with a risk management approach

A national data security requirement should remain effective over time for a variety of organizations without undue burden. One approach to achieving this flexibility is to require "reasonable" technical, physical, and administrative safeguards that are appropriate to the nature of the organization and the sensitivity of information it maintains. The reasonable safeguards should be aimed at controlling risks the organization identifies through a risk assessment, but legislation should not be over-prescriptive in what components must be in a security plan.

A benefit of this risk management approach is that not all data would need to receive the same level of protection, and the same expectation is not necessarily held for a small business as for a large global enterprise. Organizations can apply strict safeguards to especially sensitive data, and more basic safeguards to less sensitive data, as proportionate to the risks – but some protection would be in place for all data covered by the law.

Protecting against more than economic harms

There is a fierce point of contention over whether to limit any data security requirements to protection against economic or physical harm. This flashpoint usually manifests in two ways: 1) requiring security safeguards to protect only against economic or physical harm, and 2) restricting the scope of information covered by the law to information that can directly cause economic harm, such as protecting username/password combinations for financial accounts but not credentials for other online accounts.

Rapid7 believes data security protections should not be limited to those that directly relate to economic or physical harm. The limitation would not align with user expectations or the wide array of threats organizations face today. Nonfinancial credentials are significant targets, can safeguard information sensitive to the user, and are often re-used across both nonfinancial and financial accounts. Modernized cybersecurity standards should reflect that credentials for online accounts need some level of reasonable protection against unauthorized access, even if the credentials are not required to complete purchases.

Limiting safeguards to economic harm would also be a step back from existing protections in many states. Numerous states require the private sector to safeguard personal information without limiting these safeguards to protection against risks of economic harm (see AL, AK, CA, CT, FL, IN, KS, MA, MN, NM, NV, OR, RI, TX, UT). Several other states require the private sector to protect credentials for online accounts, without limiting protection to credentials necessary for purchases (see AL, CA, FL, MD, MN, NV).

Cybersecurity is a major national priority and any broad, preemptive data security regulation should not significantly undermine current standards. However, the flexible approach we support – outlined above – may apply more stringent security for information that can directly lead to economic harm, and less for other types of personal information.

A note on names

Many state laws and federal legislative proposals define personal or covered information as always requiring an individual's actual name – first and last name or first initial and last name. Under these definitions, no data security requirement would apply to usernames/passwords, ID numbers, personal media, biometrics or medical information (not covered by HIPAA/HITECH), etc., unless the user's actual name were also included. (CA, FL, MD, and MN appear to be exceptions here, though some still attach the actual name requirement to some sensitive data categories.)

We believe this name requirement is anachronistic and should be dropped or greatly limited. A breached username/password, biometric authenticator, or even a photo – with the growing sophistication and availability of facial recognition and search – can easily yield a user's actual name. Moreover, the pool of breached data has mushroomed over the years. Frequent redistribution and aggregation of breached data has made it easier to combine data elements from multiple breaches and open sources, providing another means for matching a user's actual name to other data.
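The aggregation risk described above can be made concrete with a toy sketch. This is purely illustrative, with entirely hypothetical records: two separate breach dumps, neither of which pairs credentials with an actual name, can be trivially joined on a shared field (here, an email address) to attach a real name to the credentials anyway.

```python
# Hypothetical breach A (e.g. a forum): credentials keyed by email, no names.
breach_a = [
    {"email": "jdoe@example.com", "username": "jdoe42", "password_hash": "5f4dcc3b"},
    {"email": "asmith@example.com", "username": "asmith", "password_hash": "e99a18c4"},
]

# Hypothetical breach B (e.g. a retailer): names and emails, no credentials.
breach_b = [
    {"email": "jdoe@example.com", "name": "Jane Doe"},
]

# Join the two dumps on the shared email field.
names_by_email = {rec["email"]: rec["name"] for rec in breach_b}
merged = [
    {**rec, "name": names_by_email[rec["email"]]}
    for rec in breach_a
    if rec["email"] in names_by_email
]

# The credential record now carries an actual name, even though neither
# breach contained that combination on its own.
print(merged)
```

A one-line dictionary join is all it takes; real-world breach aggregation works the same way at scale, which is why conditioning protection on the presence of an actual name offers little practical comfort.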

Encouraging crypto - and key security

Many state laws and federal legislative proposals rightfully encourage encryption or hashing of user information by exempting unreadable and unusable information from data breach notification laws. This has become pretty standard. However, this exemption should not apply if the encryption keys are also breached, since the keys can render the information readable and usable once more. (AL and MA call this out in their laws.)
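Why breached keys matter can be shown with a toy sketch. The "cipher" below is a deliberately simplistic XOR scheme used for illustration only (real systems use vetted algorithms such as AES): ciphertext breached on its own is unreadable, but ciphertext breached together with its key is plaintext for all practical purposes.

```python
from itertools import cycle

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # Toy XOR "cipher" for illustration only -- NOT a real encryption scheme.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

record = b"ssn=123-45-6789"
key = b"s3cret-key"

# Breached alone, the ciphertext reveals nothing readable.
ciphertext = xor_bytes(record, key)

# Breached together with the key, the original record is recovered exactly.
recovered = xor_bytes(ciphertext, key)
print(recovered)
```

The asymmetry is the whole point: an encryption exemption to breach notification is justified only while the attacker lacks the key, so a breach that also exposes the keys should not qualify for the exemption.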

Keep the track record in mind, but don’t be paralyzed by it

Privacy and security legislation is introduced in Congress every few months, and there is a concerted push for federal privacy or data security legislation every couple of years. Will it be different this time? Will this effort be successful?

Don't count on it. The landscape is littered with well-intentioned and unsuccessful efforts. Congress has considered broad data privacy and security legislation for more than a decade, often with greater or lesser fervor depending on the recency of high-profile events. For example, the Personal Data Privacy and Security Act was introduced in 2005, and the bill was revived in 2007, 2009, 2011, and 2014. The Obama Administration's 2012 Privacy Bill of Rights and the Kerry/McCain 2011 Commercial Privacy Bill of Rights generated much discussion, but ultimately foundered. Many other data privacy and security bills have a similar history. After many big breaches and thoughtful rhetoric about the balance between security and innovation, little has changed at the federal level, and a lot of policy and industry figures are understandably dismissive about any prospects in the near term.

So it is with caution that we approach the subject of federal baseline data security legislation. But let us be neither naive nor cynical. It is prudent to think through positions on security regulation even if imminent action is not guaranteed. The same concerns that drove policymakers to consider data security proposals circa 2005 are still firmly with us, state legislatures are not waiting on Congress to act, and many people jaded by federal inertia on data security regulation also privately whisper "It is inevitable."

[Update 05/09/18: Georgia Governor Deal vetoed SB 315. In a thoughtful veto statement, the Governor noted that the legislation raised "concerns regarding national security implications and other potential ramifications," and that "SB 315 may inadvertently hinder the ability of government and private industries" to protect against breaches. The statement expressed interest in working with the cybersecurity and law enforcement communities on a new policy.]

The Georgia state legislature recently passed a bill - SB 315 - to create a new crime of accessing a computer without authorization. This will become law unless Governor Nathan Deal vetoes the bill by May 8th. Prior to SB 315, Georgia did not have a specific crime for accessing computers without authorization unless you damaged, modified, or took something. SB 315 does not require these elements, just accessing a computer with knowledge that the access is not authorized.

The new crime created by SB 315 would include an exception for "active defense." This exception is Rapid7's foremost concern with the bill, and we urge a veto or clarifying legislation on these grounds. At minimum, to avoid undermining cybersecurity, we recommend carefully defining the term "active defense" and the boundaries of acceptable behavior the provision would allow.

There are a couple of positive elements in the bill. The new crime would have explicit exceptions for terms of service violations and "legitimate business activity," which should protect some (but not all) forms of security research and prevent this law from being used to enforce contracts and user agreements. These particular exceptions are not perfect but, in our view, are improvements on the Computer Fraud and Abuse Act (CFAA), which has long drawn criticism for overbreadth and for its use to litigate terms of service violations. However, we view the "active defense" provision in SB 315 as potentially dangerous.

"Active defense" can mean several things, but has become a loaded term for retaliatory "hacking back" in federal policymaking, in part due to the Active Cyber Defense Certainty (ACDC) Act, sponsored by Rep. Tom Graves - also of Georgia. The ACDC Act was repeatedly referenced favorably as state legislators considered SB. 315, though the ACDC Act is a a long way from passing Congress and becoming federal law.

Rapid7 has opposed hack back as dangerous for cybersecurity, and accordingly we oppose the ACDC Act. (We provided our feedback to Graves' staff, and they were cordial and receptive.) But at least the ACDC Act defines "active cyber defense measure," limits its use to defending attacks on one's own network, and would install some government oversight over the practice. SB 315 lacks any of these modest safeguards.

Instead, SB 315's active defense provision seems to explicitly authorize an individual or organization to access another person's computer, knowing that the access is unauthorized, for the purpose of preemptively preventing the other person from unauthorized access. (Note, the wording seems to allow an active defender to prevent unauthorized access to anyone else, so the use of active defense measures is not limited to defending one's own network.) What sort of scenarios could this encompass? Here is a hypothetical: Remotely breaking into and searching another person's computers to see if that person possesses stolen passwords that could potentially be used for unauthorized access.

Are there some legitimately beneficial activities that could be covered by this active defense provision? Yes, it's broad enough to cover many scenarios, good and bad. During the GA House hearing, some legislators indicated that the active defense provision could actually protect independent security research – which is typically not how "active defense" is conceived and further indicates legislators' intent to construe the term broadly. The goal of protecting researchers could be accomplished with something much more clearly defined. The sponsors of SB 315 worried (not unreasonably) that a broad security researcher carve-out could be abused by bad actors, but this vague active defense provision actually seems to run a greater risk.

Obviously, hacking back beyond one's own network remains illegal at the federal level under the CFAA, so federal law enforcement could still prosecute such behavior, though SB 315 would prevent Georgia from using its resources to do so. In addition, any hacking back authorized under SB 315 would be limited to unauthorized access only – activity that involved damage or theft would still be illegal under Georgia's other computer crime laws. It is also true that, since this is an exception to the creation of a new crime, Georgia state law may not have prevented hacking back prior to SB 315, and a veto of SB 315 would maintain the status quo.

Nonetheless, SB 315 would affirmatively authorize hack back behavior in Georgia, becoming the first state to do so, with the side effect of giving a normalizing boost to the federal legislation. We think ultimately this precedent would be bad for cybersecurity and risks harming innocent users, and that – at minimum – the legislation should be rejected until "active defense" is defined much more clearly and narrowly.

Good news for security researchers: A key guideline for cybersecurity risk management now includes vulnerability disclosure and handling processes. The National Institute of Standards and Technology (NIST) provisionally added this as a core practice in the next version of the NIST Cybersecurity Framework. Rapid7 worked with the security community to help drive this revision, and we are excited about its inclusion - there is a greater likelihood of a positive outcome for cybersecurity when organizations are prepared to receive vulnerability disclosures from external sources, such as researchers. We intend to file additional comments to NIST in support of this change.

The revision

The latest NIST Cybersecurity Framework revision includes a new core element:

RS.AN-5: Processes are established to receive, analyze and respond to vulnerabilities disclosed to the organization from internal and external sources (e.g. internal testing, security bulletins, or security researchers)

This revision directs organizations using the NIST Framework to consider vuln handling processes when developing a cybersecurity risk management program. The inclusion of vuln handling processes in the NIST Cybersecurity Framework will help raise awareness and adoption of the practice, particularly with critical infrastructure providers and government agencies (many of whom are required to use the Framework for risk management planning).

Rapid7 and community involvement

Rapid7 has long championed coordinated vulnerability disclosure and handling processes as a key component of cybersecurity programs and as a protection for researchers. When NIST solicited feedback on its Cybersecurity Framework, we drafted comments arguing at length for inclusion of vulnerability disclosure and handling processes, and gathered signatures from more than two dozen companies, organizations, and individual researchers in support of it. [See our comments here.] We also engaged the Coalition for Cybersecurity Policy & Law, a policy advocacy group for cybersecurity companies (including us) run by Venable, which agreed to reiterate and cite to our recommendations in their own comments [pgs. 3-4]. Then Rapid7 helped lead a breakout session on the topic during the NIST Framework workshop.

Thankfully, NIST was open to this feedback and took the subject seriously. The new revision to the NIST Cybersecurity Framework closely mirrors our primary recommendation. [Compare the Framework at the bottom of pg. 49 with our comments at the top of pg. 3.]

We’re very glad that NIST agreed to incorporate vulnerability handling processes, and believe the revision will prove useful for organizations implementing the Framework, as well as security researchers. We are grateful for NIST's commitment to considering public input and for driving this issue. We also deeply appreciate the support we received from other government officials, the Venable Coalition, and the signatories to our comments. Special thanks to NTIA, Luta Security, I Am The Cavalry, and McAfee.

What the revision is and ain't

When evaluating what this change means, there are a couple things to keep in mind.

First, the NIST Cybersecurity Framework is voluntary for private sector companies – though it is mandatory for federal government agencies, as well as some state agencies. The Framework is also intended to be a flexible planning tool, not a checklist, so each organization may implement it differently. The point is, don't assume all organizations will have fully matured or consistent processes for receiving vulnerability information from researchers. But more will, and the Framework revision helps pave the way to working with organizations to incorporate effective coordinated disclosure processes.

Second, this revision is not necessarily a safe harbor or a bug bounty. As we emphasized in our comments [fn. 2], there is no requirement that organizations that have vulnerability handling processes provide liability protection for researchers who disclose bugs. If there were, it would be much more difficult to get consensus on including the practice in the Framework. We would support a legal safe harbor, but organizations will have to decide whether it is right for them. However, even without a formal safe harbor, vulnerability handling processes can benefit researchers by providing them with an established channel for disclosure and a feedback loop. When organizations are prepared to receive and analyze vulnerabilities from external sources, there is less chance of conflict due to misunderstood intentions, ignored vulnerability disclosures, or lawyers freaking out (confession: I am a lawyer and I freak out all the time).

Next step: More comments!

This latest revision of the NIST Framework is technically still in draft form. The fact that vuln handling processes are in the draft is a good sign that they will stay, but that is not set in stone just yet. NIST is accepting public comments on all its revisions until Jan. 19th, 2018, and so we will file group comments in support of the revision. We will also likely use this next round of comments to again recommend that the Framework reference ISO/IEC 30111:2013 and ISO/IEC 29147:2014. Its current references for vuln handling processes are not as on point.

That's how a lot of policy goes: Iterative changes, heavy on process, and rarely is any issue settled forever & ever. That makes sustained engagement important, and it also makes the progress achieved in this new revision worth celebrating.

The White House recently released new details on the process the US government uses for disclosing zero-day vulnerabilities to vendors, or withholding disclosure for law enforcement or national security operations—called the Vulnerabilities Equities Process (VEP). The new charter for the VEP is available here.

The Administration's cybersecurity team deserves credit for significantly boosting the transparency of the VEP. Until now, few details were publicly confirmed about the VEP, and official sources had discussed the VEP only in generalities. By comparison, the new VEP charter is quite thorough and provides answers to several key questions, though it remains to be seen how it will operate in practice.

Rapid7 considers it important for the government to have a mature and effective VEP in place because private actors need to know about their vulnerabilities in order to make systems more secure. We recognize that the government has legitimate reasons to identify and exploit cybersecurity vulnerabilities, but the exercise of that power must be balanced with (among other things) the risks of failure of the systems on which we all rely. Greater transparency on the government's process for vulnerability disclosures is helpful to ensure—but not a guarantee—that the process strikes this balance appropriately.

Key details in the VEP charter include:

The charter makes clear that disclosure is the default and stockpiling is not the US government's policy. The charter notes the primary focus of the process is to protect cybersecurity, and that disclosure is in the national interest in the "vast majority" of cases (absent demonstrable, overriding national security or law enforcement interests). [Pg. 1.]

The VEP includes significant consideration of non-government interests. The charter reveals a non-exclusive list of considerations to weigh in deciding whether to disclose vulnerabilities. Many of these considerations focus on the interests of civilians, the private sector, and cybersecurity generally, not just law enforcement and national security. There is also some responsibility levied here on industry, as the considerations include the likelihood that vendors would create patches and that system operators would apply them. [Pgs. 13-14]

Decisions against disclosing vulnerabilities to vendors are reviewed at least annually, and immediately in cases of malicious exploitation. [Pg. 8]

As is to be expected, the charter includes areas of ambiguity. For example, the charter includes a broad exception to the VEP for vulnerabilities obtained from partners (such as other nations and commercial arrangements) and those used in "sensitive operations"—a term that is undefined in the charter. [Pg. 9] According to media reports, this exception for sensitive operations was highly controversial within the Administration. In addition, the VEP empowers the US Cybersecurity Coordinator to choose whether and how to report the quantity of vulnerabilities that the government does not disclose. [Pg. 5] Yet the quantity will help indicate whether vulnerability stockpiling is indeed occurring. Obviously it would be unreasonable to expect publication of precise numbers, but the oversight is nevertheless fuzzy here. We will just have to see how these issues play out as the VEP is implemented over time.

We expect the VEP to grow in importance as exploitation of vulnerabilities becomes more routine in law enforcement, intelligence, and military activities. It's also worth noting—as Kate Charlet, Sasha Romanosky, and Bert Thompson do—that this issue is certainly not confined to the United States. Nations around the world are increasingly likely to identify and use zero-day vulnerabilities. It would be helpful if other countries developed, coordinated, and published their own vulnerability disclosure processes. The articulation of the US government's VEP is a very positive step, but not the final one.

When the North American Free Trade Agreement (NAFTA) was originally negotiated, cybersecurity was not a central focus. NAFTA came into force – removing obstacles to commercial trade activity between the US, Canada, and Mexico – in 1994, well before most digital services existed. Today, cybersecurity is a major economic force – itself a large industry and important source of jobs, as well as an enabler of broader economic health by reducing risk and uncertainty for businesses. Going forward, cybersecurity should be an established component of modernized trade agreements and global trade policy.

The Trump Administration is now modernizing NAFTA, with the first renegotiation round concluding recently. There are several key ways the US, Mexican, and Canadian governments can use this opportunity to advance cybersecurity. In this blog post, we briefly describe two of them: 1) Aligning cybersecurity frameworks, and 2) protecting strong encryption.

For more about Rapid7's recommendations on cybersecurity and trade, check out our comments on NAFTA to the US Trade Representative (USTR), or check out my upcoming presentation on this very subject at Rapid7's UNITED conference!

Align cybersecurity frameworks

Trade agreements should broadly align approaches to cybersecurity planning by requiring the parties to encourage voluntary use of a comprehensive, standards-based cybersecurity risk management framework. The National Institute of Standards and Technology's (NIST) Cybersecurity Framework for Critical Infrastructure ("NIST Cybersecurity Framework") is a model of this type of framework, and is already experiencing strong adoption in the U.S. and elsewhere.

In addition to our individual comments to USTR, Rapid7 joined comments from the Coalition for Cybersecurity Policy and Law, and also organized a joint letter with ten other cybersecurity companies, urging USTR to incorporate this recommendation into NAFTA.

International alignment of risk management frameworks would promote trade and cybersecurity by:

Streamlining trade of cybersecurity products and services. To oversimplify, think of a cybersecurity framework like a list of goals and activities – it is easier to find the right products and services if everyone is referencing a similar list. Alignment on a comprehensive framework would enable cybersecurity companies to map their products and services to the framework more consistently. Alignment can also help less mature markets know what specific cybersecurity goals to work toward, which will clarify the types of products they need to achieve these goals, leading to more informed investment decisions that hold service providers to consistent benchmarks.

Enabling many business sectors by strengthening cybersecurity. Manufacturing, agriculture, healthcare, and virtually all other industries are going digital, making computer security crucial for their daily operations and future success. Broader use of a comprehensive risk management framework can raise the baseline cybersecurity level of trading partners in all sectors, mitigating cyber threats that hinder commercial activity, fostering greater trust in services that depend upon secure infrastructure, and strengthening the system of international trade.

Helping address trade barriers and market access issues. Country-specific approaches to cyber regulation – such as data localization or requiring use of specific technologies – can raise market access issues or force ICT companies to make multiple versions of the same product. International alignment on interoperable, standards-based cybersecurity principles and processes would reduce unnecessary variation in regulatory approaches and help provide clear alternatives to cybersecurity policies that inhibit free trade.

To keep pace with innovation and evolving threats, prevent standards from reducing market access, and incorporate the input of private sector experts, the risk management framework should be voluntary, flexible, and developed in an industry-led and transparent process. For example, the NIST Cybersecurity Framework is voluntary and was developed through an open process in which anyone can participate. The final trade agreement text need not dictate the framework content beyond basic principles, but should instead encourage the development, alignment, and use of functionally similar cybersecurity frameworks.

Prohibit requirements to weaken encryption

Critical infrastructure, commerce, and individuals depend on encryption as a fundamental means of protecting data from unauthorized access and use. Market access rules requiring weakened encryption would create technical barriers to trade and put products with weakened encryption at a competitive disadvantage relative to uncompromised products. Requirements to weaken encryption can impose significant security risks on companies by creating diverse new attack surfaces for bad actors, including cybercriminals and unfriendly international governments – ultimately undermining the security of end-users, businesses, and governments.

NAFTA should include provisions forbidding parties from conditioning market access for cryptography used for commercial applications on the transfer of private keys, algorithm specification, or other design details. The final draft text of the Trans-Pacific Partnership (TPP) contained a similar provision – though Congress never ratified TPP, so it never came into force.

Although this provision would be helpful to protect strong encryption, it would only apply to commercial activities. The current version of NAFTA contains exceptions for regulations undertaken for national security (as did TPP, in addition to clarifications that a nation's law enforcement agencies could still demand information pursuant to their legal processes). This may limit the overall protectiveness of the provision, but should also moderate concerns a nation might have about including encryption protection in the trade agreement.

This is just the beginning

The NAFTA parties have set an aggressive pace for negotiations, with the goal of agreeing on a final draft by the end of the year. However, the original agreement took years to finalize, and NAFTA covers many subjects that can attract political controversy. So NAFTA's timeline, and openness to incorporating new cybersecurity provisions, are not entirely clear.

Nonetheless, the Trump Administration has indicated that both international trade and cybersecurity are priorities. Even as the NAFTA negotiations roll on, the Administration has begun examining the Korea-US trade agreement, and both new agreements and modernization of previous agreements are likely future opportunities.

Trade agreements can last decades, so considering how best to embed cybersecurity priorities should not be taken lightly. Rapid7 will continue to work with private and public sector partners to strengthen cybersecurity and industry growth through trade agreements.

On Jun. 22, the US Copyright Office released its long-awaited study on Sec. 1201 of the Digital Millennium Copyright Act (DMCA), and it has important implications for independent cybersecurity researchers. Mostly the news is very positive. Rapid7 advocated extensively for researcher protections to be built into this report, submitting two sets of detailed comments—see here and here—to the Copyright Office with Bugcrowd, HackerOne, and Luta Security, as well as participating in official roundtable discussions. Here we break down why this matters for researchers, what the Copyright Office's study concluded, and how it matches up to Rapid7's recommendations.

What is DMCA Sec. 1201 and why does it matter to researchers?

Sec. 1201 of the DMCA prohibits circumventing technological protection measures (TPMs, like encryption, authentication requirements, region coding, user agents) to access copyrighted works, including software, without permission of the owner. That creates criminal penalties and civil liability for independent security research that does not obtain authorization for each TPM circumvention from the copyright holders of software. This hampers researchers' independence and flexibility. While the Computer Fraud and Abuse Act (CFAA) is more famous and feared by researchers, liability for DMCA Sec. 1201 is arguably broader because it applies to accessing software on devices you may own yourself while CFAA generally applies to accessing computers owned by other people.

To temper this broad legal restraint on unlocking copyrighted works, Congress built in two types of exemptions to Sec. 1201: permanent exemptions for specific activities, and temporary exemptions that the Copyright Office can grant every three years in its "triennial rulemaking" process. The permanent exemption to the prohibition on circumventing TPMs for security testing is quite limited – in part because researchers are still required to get prior permission from the software owner. The temporary exemptions go beyond the permanent exemptions.

In Oct. 2015 the Copyright Office granted a very helpful exemption to Sec. 1201 for good faith security testing that circumvents TPMs without permission. However, this temporary exemption will expire at the end of the three-year exemption window. In the past, once a temporary exemption expired, advocates had to start from scratch in re-applying for another temporary exemption. The temporary exemption is set to expire Oct. 2018, and the renewal process will ramp up in the fall of this year.

Copyright Office study and Rapid7's recommendations

The Copyright Office announced a public study of Sec. 1201 in Dec. 2015. The Copyright Office undertook this public study to weigh legislative and procedural reforms to Sec. 1201, including the permanent exemptions and the three-year rulemaking process. The Copyright Office solicited two sets of public comments and held a roundtable discussion to obtain feedback and recommendations for the study. At each stage, Rapid7 provided recommendations on reforms to empower good faith security researchers while preserving copyright protection against infringement – though, it should be noted, there were several commenters opposed to reforms for researchers on IP protection grounds.

Broadly speaking, the conclusions reached in the Copyright Office's study are quite positive for researchers and largely tracked the recommendations of Rapid7 and other proponents of security research. Here are four key highlights:

Authorization requirement: As noted above, the permanent exemption for security testing under Sec. 1201(j) is limited because it still requires researchers to obtain authorization to circumvent TPMs. Rapid7's recommendation is to remove this requirement entirely because good faith security research does not infringe copyright, yet an authorization requirement compromises independence and speed of research. The Copyright Office's study recommended [at pg. 76] that Congress make this requirement more flexible or remove it entirely. This is arguably the study's most important recommendation for researchers.

Multi-factor test: The permanent exemption for security testing under Sec. 1201(j) also conditions liability protection in part on researchers using the results "solely" to promote the security of the computer owner, and not using the results in a manner that violates copyright or any other law. Rapid7's recommendations are to remove "solely" (since research can be performed for the security of users or the public at large, not just the computer owner), and not to penalize researchers if their research results are used by unaffiliated third parties to infringe copyright or violate laws. The Copyright Office's study recommended [at pg. 79] that Congress remove the "solely" language, and either clarify or remove the provision penalizing researchers when research results are used by third parties to violate laws or infringe copyright.

Compliance with all other laws: The permanent exemption for security testing only applies if the research does not violate any other law. Rapid7's recommendation is to remove this caveat, since research may implicate obscure or wholly unrelated federal/state/local regulations, those other laws have their own enforcement mechanisms to pursue violators, and removing liability protection under Sec. 1201 would only have the effect of compounding the penalties. Unfortunately, the Copyright Office took a different approach, tersely noting [at pg. 80] that it is unclear whether the requirement to comply with all other laws impedes legitimate security research. The Copyright Office stated they welcome further discussion during the next triennial rulemaking, and Rapid7 may revisit this issue then.

Streamlined renewal for temporary exemptions: As noted above, temporary exemptions expire after three years. In the past, proponents had to start from scratch to renew the temporary exemption – a process that involves structured petitions, multiple rounds of comments to the Copyright Office, and countering the arguments of opponents to the exemption. For researchers who want to renew the temporary security testing exemption, but who lack resources and regulatory expertise, this is a burdensome process. Rapid7's recommendation is for the Copyright Office to presume renewal of previously granted temporary exemptions unless there is a material change in circumstances that no longer justifies granting the exemption. In its study, the Copyright Office committed [at pg. 143] to streamlining the paperwork required to renew already granted temporary exemptions. Specifically, the Copyright Office will ask parties requesting renewal to submit a short declaration of the continuing need for an exemption, and whether there has been any material change in circumstances voiding the need for the exemption, and then the Copyright Office will consider renewal based on the evidentiary record and comments from the rulemaking in which the temporary exemption was originally granted. Opponents of renewing exemptions, however, must start from scratch in submitting evidence that a temporary exemption harms the exercise of copyright.

Conclusion—what's next?

In the world of policy, change typically occurs over time in small (often hard-won) increments before becoming enshrined in law. The Copyright Office's study is one such increment. For the most part, the study is making recommendations to Congress, and it will ultimately be up to Congress (which has its own politics, processes, and advocacy opportunities) to adopt or decline these recommendations. The Copyright Office's study comes at a time when the House Judiciary Committee is broadly reviewing copyright law with an eye toward possible updates. However, copyright is a complex and far-reaching field, and it is unclear when Congress will actually take action. Nonetheless, the Copyright Office's opinion on these issues will carry significant weight in Congress' deliberations, so it would have been a heavy blow if the Copyright Office's study had instead called for tighter restrictions on security research.

Importantly, the Copyright Office's new, streamlined process for renewing already granted temporary exemptions will take effect without Congress' intervention. The streamlined process will be in place for the next "triennial rulemaking," which begins in late 2017 and concludes in 2018, and which will consider whether to renew the temporary exemption for security research. This is a positive, concrete development that will reduce the administrative burden of applying for renewal and increase the likelihood of continued protections for researchers.

The Copyright Office's study noted that "Independent security test[ing] appears to be an important component of current cybersecurity practices". This recognition and subsequent policy shifts on the part of the Copyright Office are very encouraging. Rapid7 believes that removing legal barriers to good faith independent research will ultimately strengthen cybersecurity and innovation, and we hope to soon see legislative reforms that better balance copyright protection with legitimate security testing.

Senator Ed Markey (D-MA) is poised to introduce legislation to develop a voluntary cybersecurity standards program for the Internet of Things (IoT). The legislation, called the Cyber Shield Act, would enable IoT products that comply with the standards to display a label indicating a strong level of security to consumers – like an Energy Star rating for IoT. Rapid7 supports this legislation and believes greater transparency in the marketplace will enhance cybersecurity and protect consumers.

The burgeoning IoT marketplace holds a great deal of promise for consumers. But as the Mirai botnet made starkly clear, many IoT devices – especially at the inexpensive range of the market – are as secure as leaky boats. Rapid7's Deral Heiland and Tod Beardsley, among many others, have written extensively about IoT's security problems from a technical perspective.

Policymakers have recognized the issue as well and are taking action. Numerous federal agencies (such as FDA and NHTSA) have set forth guidance on IoT security as it relates to their area of authority, and others (such as NIST) have long pushed for consistent company adoption of baseline security frameworks. In addition to these important efforts, we are encouraged that Congress is also actively exploring market-based means to bring information about the security of IoT products to the attention of consumers.

Sen. Markey's Cyber Shield Act would require the Dept. of Commerce to convene public and private sector experts to establish security benchmarks for select connected products. The working group would be encouraged to incorporate existing standards rather than create new ones, and the benchmarks would change over time to keep pace with evolving threats and expectations. The process, like that which produced the NIST Cybersecurity Framework, would be open for public review and comment. Manufacturers may voluntarily display "Cyber Shield" labels on IoT products that meet the security benchmarks (as certified by an accredited testing entity).

Rapid7 supports voluntary initiatives to raise consumer awareness of product security, like that proposed in the Cyber Shield Act. Consumers often cannot easily determine the level of security in the products they purchase, so an accessible and reliable system is needed to help inform purchase decisions. As consumers evaluate and value IoT security more, competing manufacturers will respond by prioritizing secure design, risk management practices, and security processes. Consumers and the IoT industry can both benefit from this approach.

The legislation is not without its challenges, of course. To be effective, the security benchmarks envisioned by the bill must be clear and focused, rather than generally applicable to all connected devices. The program would need buy-in from security experts and responsible manufacturers, and consumers would need to learn how to spot and interpret Cyber Shield labels. But overall, Rapid7 believes Sen. Markey's Cyber Shield legislation could encourage greater transparency and security for IoT. Strengthening the IoT ecosystem will require a multi-pronged approach from policymakers, and we think lawmakers should consider incorporating this concept as part of the plan.

In April 2017, President Trump issued an executive order directing a review of all trade agreements. This process is now underway: The United States Trade Representative (USTR) – the nation's lead trade agreement negotiator – formally requested public input on objectives for the renegotiation of the North American Free Trade Agreement (NAFTA). NAFTA is a trade agreement between the US, Canada, and Mexico, that covers a huge range of topics, from agriculture to healthcare.

Digital goods and services are increasingly critical to the US economy. By leveraging cloud computing, digital commerce offers significant opportunities to scale globally – not just for large companies or tech companies, but for individuals and for any transnational company that stores customer data.

However, regulations abroad that disrupt the free flow of information, such as "data localization" (requirements that data be stored in a particular jurisdiction), impede both trade and innovation. Data localization erodes the capabilities and cost savings that cloud computing can provide, while adding the significant costs and technical burdens of segregating data collected from particular countries, maintaining servers locally in those countries, and navigating complex geography-based laws. The resulting fragmentation also undermines the fundamental concept of a unified and open global internet.

Rapid7's comments [pages 2-3] recommended that NAFTA should 1) Prevent compulsory localization of data, and 2) Include an express presumption that governments would minimize disruptions to the flow of commercial data across borders.

When NAFTA was originally negotiated, cybersecurity was not the central concern that it is today. Cybersecurity is now a global affair, and the consequences of a malicious cyberattack or accidental breach are not constrained by national borders.

Flexible, comprehensive security standards are important for organizations seeking to protect their systems and data. International interoperability and alignment of cybersecurity practices would benefit companies by enabling them to better assess global risks, make more informed decisions about security, hold international partners and service providers to a consistent standard, and ultimately better protect global customers and constituents. Stronger security abroad will also help limit the spread of malware contagion to the US.

We support the approach taken by the National Institute of Standards and Technology (NIST) in developing the Cybersecurity Framework for Critical Infrastructure. The process was open, transparent, and carefully considered the input of experts from the public and private sector. The NIST Cybersecurity Framework is now seeing impressive adoption among a wide range of organizations, companies, and government agencies – including some critical infrastructure operators in Canada and Mexico.

Rapid7's comments [pages 3-4] recommended that NAFTA should 1) recognize the importance of international alignment of cybersecurity standards, and 2) require the Parties to develop a flexible, comprehensive cybersecurity risk management framework through a transparent and open process.

3) Protect strong encryption

Reducing opportunities for attackers and identifying security vulnerabilities are core to cybersecurity. The use of encryption and security testing are key practices in accomplishing these tasks. International regulations that require weakening of encryption or prevent independent security testing ultimately undermine cybersecurity.

Encryption is a fundamental means of protecting data from unauthorized access or use, and Rapid7 believes companies and innovators should be able to use the encryption protocols that best protect their customers and fit their service model – whether that protocol is end-to-end encryption or some other system. Market access rules requiring weakened encryption would create technical barriers to trade and put products with weakened encryption at a competitive disadvantage with uncompromised products. Requirements to weaken encryption would impose significant security risks on US companies by creating diverse new attack surfaces for bad actors, including cybercriminals and unfriendly international governments.

Rapid7's comments [page 5] recommended that NAFTA forbid Parties from conditioning market access for cryptography in commercial applications on the transfer of decryption keys or alteration of the encryption design specifications.

4) Protect independent security research

Good faith security researchers access software and computers to identify and assess security vulnerabilities. To perform security testing effectively, researchers often need to circumvent technological protection measures (TPMs) – such as encryption, login requirements, region coding, user agents, etc. – controlling access to software (a copyrighted work). However, this activity can be chilled by Sec. 1201 of the Digital Millennium Copyright Act (DMCA) of 1998, which forbids circumvention of TPMs without the authorization of the copyright holder.

Good faith security researchers do not seek to infringe copyright, or to interfere with a rightsholder's normal exploitation of protected works. The US Copyright Office recently affirmed that security research is fair use and granted this activity, through its triennial rulemaking process, a temporary exemption from the DMCA's requirement to obtain authorization from the rightsholder before circumventing a TPM to safely conduct security testing on lawfully acquired (i.e., not stolen or "borrowed") consumer products.

Some previous trade agreements have closely mirrored the DMCA's prohibitions on unauthorized circumvention of TPMs protecting copyrighted works. This approach replicates internationally the overbroad restrictions on independent security testing that the US is now scaling back. Newly negotiated trade agreements should aim to strike a more modern and evenhanded balance between copyright protection and good faith cybersecurity research.

Better trade agreements for the Digital Age?

Data storage and cybersecurity have undergone considerable evolution since NAFTA was negotiated more than a quarter century ago. To the extent that renegotiation may better address trade issues related to digital goods and services, we view the modernization of NAFTA and other agreements as potentially positive. The comments Rapid7 submitted regarding NAFTA will likely apply to other international trade agreements as they come up for renegotiation. We hope the renegotiations result in a broadly equitable and beneficial trade regime that reflects the new realities of the digital economy.

The Executive Order (EO) appears broadly positive and well thought out, though it is just the beginning of a long process and not a sea change in itself. The EO directs agencies to come up with plans for securing and modernizing their networks; develop international cyber norms; work out a deterrence strategy against hacking; and reduce the threat of botnets – all constructive, overdue goals. Below are an overview, a highlight reel, and some additional takeaway thoughts.

Cybersecurity Executive Order Overview

Executive orders are issued only by the President, direct the conduct of executive branch agencies (not the private sector, legislature, or judiciary), and have the force of law. All public (not classified) EOs are published here. This cyber EO is the first major move the Trump White House (as distinct from other federal agencies) has made publicly on cybersecurity.

Protecting critical infrastructure. Directs agencies to work with the private sector to protect critical infrastructure, incentivize more transparency on critical infrastructure cybersecurity, improve resiliency of communication infrastructure, and reduce the threat of botnets.

National preparedness and workforce development. Directs agencies to assess strategic options for deterring and defending against adversaries. Directs agencies to report their international cybersecurity priorities, and to promote international norms on cybersecurity.

Cybersecurity Executive Order Highlights

Federal network cybersecurity:

The US will now manage cyber risks as an executive branch enterprise. The President is holding cabinet and agency heads accountable for implementing risk management measures commensurate with the risk and magnitude of harm. [Sec. 1(c)(i)]

Agencies are directed to immediately use the NIST Framework for Improving Critical Infrastructure Cybersecurity (NIST Framework) to manage their cyber risk. [Sec. 1(c)(ii)]

DHS and OMB must report to the President, within 120 days, a plan to secure federal networks, address budgetary needs for cybersecurity, and reconcile all policies and standards with the NIST Framework. [Sec. 1(c)(iv)]

Agencies must now show preference for shared IT services (including cloud and cybersecurity services) in IT procurement. [Sec. 1(c)(vi)(A)] This effort will be coordinated by the American Technology Council. [Sec. 1(c)(vi)(B)]

The White House, in coordination with DHS and other agencies, must submit a report to the President, within 60 days, on modernizing federal IT, including transitioning all agencies to consolidated network architectures and shared IT services – with specific mention of cybersecurity services. [Sec. 1(c)(vi)(B)-(C)] Defense and intel agencies must submit a similar report for national security systems within 150 days. [Sec. 1(c)(vii)]

Critical infrastructure cybersecurity:

Critical infrastructure includes power plants, oil and gas facilities, the financial system, and other systems that would put national security at risk if damaged. The EO states that it is the government's policy to use its authorities and capabilities to support the cybersecurity risk management of critical infrastructure. [Sec. 2(a)]

The EO directs DHS, DoD, and other agencies to assess authorities and opportunities to coordinate with the private sector to defend critical infrastructure. [Sec. 2(b)]

DHS and DoC must submit a report to the President, within 90 days, on promoting market transparency of cyber risk management practices by critical infrastructure operators, especially those that are publicly traded. [Sec. 2(c)]

DoC and DHS shall work with industry to promote voluntary actions to improve the resiliency of internet and communications infrastructure and "dramatically" reduce the threat of botnet attacks. [Sec. 2(d)]

The EO reiterates the US government's commitment to an open, secure Internet that fosters innovation and communication while respecting privacy and guarding against disruption. [Sec. 3(a)]

Cabinet members must submit a report to the President, within 90 days, on options for deterring adversaries and protecting Americans from cyber threats. [Sec. 3(b)]

Cabinet members must report to the President, within 45 days, on international cybersecurity priorities, including investigation, attribution, threat info sharing, response, etc. The agencies must report to the President, within 90 days, on a strategy for international cooperation in cybersecurity. [Sec. 3(c)]

Agencies must report to the President, within 120 days, how to grow and sustain a workforce skilled in cybersecurity and related fields. [Sec. 3(d)(i)]

The Director of National Intelligence must report to the President, within 60 days, on workforce development practices of foreign peers to compare long-term competitiveness in cybersecurity. [Sec. 3(d)(ii)] Agencies must report to the President, within 150 days, on US efforts to maintain advantage in national-security-related cyber capabilities. [Sec. 3(d)(iii)]

The Executive Order is just the start

As you can see, the EO initially requires a lot of multi-agency reports, which we can expect to surface in coming months, and which can then be used to craft official policy. There are opportunities for the private sector to provide input to the agencies as they develop those reports, though the 2-4 month timelines are pretty tight for such complex topics. But the reports are just the beginning of long processes to accomplish the goals set forth in the EO - it will take a lot longer than 60 days, for example, to fully flesh out and implement a plan to modernize federal IT. The long haul is beginning, and we won't know how transformative or effective this process will be for some time.

Building on this, we recently joined forces with other members of the security community to urge NIST and NTIA (both part of the U.S. Dept. of Commerce) to promote adoption of coordinated vulnerability disclosure processes. In each of these two most recent filings, Rapid7 was joined by a coalition of approximately two dozen (!!) like-minded cybersecurity firms, civil society organizations, and individual researchers.

Joint comments to the National Institute of Standards and Technology (NIST) Cybersecurity Framework, available here.

Joint comments to the National Telecommunications and Information Administration's (NTIA) "Green Paper" on the Internet of Things, available here.

The goal of the comments is for these agencies to incorporate coordinated vulnerability disclosure and handling processes into official policy positions on IoT security (in the case of NTIA) and cybersecurity guidance to other organizations (in the case of NIST). We hope this ultimately translates to broader adoption of these processes by both companies and government agencies.

What are "vuln disclosure processes" and why are they important?

Okay, first off, I really hope infosec vernacular evolves to come up with a better term than "coordinated vulnerability disclosure and handling processes" because boy that's a mouthful. But it appears to be the generally agreed-upon term.

A coordinated vulnerability disclosure and handling process is basically an organization's plan for dealing with security vulnerabilities disclosed from outside the organization. These are formal internal mechanisms for receiving, assessing, and mitigating security vulnerabilities submitted by external sources, such as independent researchers, and for communicating the outcome to the vulnerability reporter and affected parties. These processes are easy to establish (relative to many other security measures) and may be tailored to an organization's unique needs and resources. Coordinated vulnerability disclosure and handling processes are not necessarily "bug bounty programs" and may or may not offer incentives, or a guarantee of protection against liability, to vulnerability reporters.
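To make the shape of such a process concrete, here is a minimal Python sketch of tracking a report through the stages described above (receive, assess, mitigate, communicate). The class, stage names, and fields are purely illustrative assumptions – not a standard, and not Rapid7's or any agency's actual process:

```python
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum, auto

class Stage(Enum):
    """Illustrative lifecycle stages for an externally reported vulnerability."""
    RECEIVED = auto()           # report arrives from an outside researcher
    TRIAGED = auto()            # internally assessed and confirmed
    MITIGATED = auto()          # fix or workaround shipped
    REPORTER_NOTIFIED = auto()  # outcome communicated back to the reporter

@dataclass
class VulnReport:
    reporter: str
    summary: str
    stage: Stage = Stage.RECEIVED
    history: list = field(default_factory=list)

    def advance(self, new_stage: Stage, note: str = "") -> None:
        # Record every transition with a timestamp so the organization can
        # communicate status to the reporter and affected parties.
        self.history.append((datetime.utcnow(), self.stage, new_stage, note))
        self.stage = new_stage

report = VulnReport(reporter="independent researcher",
                    summary="hypothetical auth bypass in an admin API")
report.advance(Stage.TRIAGED, "reproduced by the product security team")
report.advance(Stage.MITIGATED, "fix shipped in a patch release")
report.advance(Stage.REPORTER_NOTIFIED, "outcome shared with the reporter")
```

Even this toy structure captures the point of the comments: a disclosure that arrives unsolicited has a defined path to the right people, and every step toward resolution is recorded so it can be communicated back out.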

Why are these processes important? The quantity, diversity, and complexity of vulnerabilities will prevent many organizations from detecting all vulnerabilities without independent expertise or manpower. When companies are contacted by unsolicited third parties about vulnerabilities in their products or IT, having a plan in place to get the information to the right people leads to quicker resolution. Security researchers disclosing vulnerabilities are also better protected when companies clarify a process for receiving, analyzing, and responding to the disclosures – being prepared helps avoid misunderstandings or fear that can lead to legal threats or conflicts.

To catch vulnerabilities they might otherwise overlook, businesses and government agencies are increasingly implementing vulnerability disclosure and handling processes, but widespread adoption is not yet the norm.

NIST Framework comments

The NIST Framework is a voluntary guidance document for organizations managing cybersecurity risks. The Framework has seen growing adoption and recognition, and is an increasingly important resource that helps shape cybersecurity implementation in the public and private sectors. NIST proposed revisions to the Framework and solicited comments on the revisions.

Specifically, our comments recommended that the Framework Core include a new subcategory dedicated to vulnerability disclosure processes, and that these processes be built into existing subcategories on risk assessment and third party awareness. Our comments also recommended revising the "external participation" metric of the Framework Tiers to lay out a basic maturity model for vulnerability disclosure processes.

NTIA Internet of Things "Green Paper" comments

NTIA issued a “Green Paper” in late 2016 to detail its overall policies with regard to the Internet of Things, and then solicited feedback and comments on that draft. Although the Dept. of Commerce has demonstrated its support for vulnerability disclosure and handling processes, there was little discussion of this issue in the Green Paper. The Green Paper is important because it will set the general policy agenda and priorities for the Dept. of Commerce on the Internet of Things (IoT).

In our joint comments, the coalition urged NTIA to include a more comprehensive discussion of vulnerability disclosure and handling processes for IoT. This will help clarify and emphasize the role of vulnerability disclosure in the Dept. of Commerce's policies on IoT security going forward.

The comments also urged NTIA to commit to actively encouraging IoT vendors to adopt vulnerability disclosure and handling processes. The Green Paper mentioned NTIA's ongoing "multistakeholder process" on vulnerability disclosure guidelines, which Rapid7 participates in, but the Green Paper did not discuss any upcoming plans for promoting adoption of vulnerability disclosure and handling processes. Our comments recommended that NTIA promote adoption among companies and government agencies in IoT-related sectors, as well as work to incorporate the processes into security guidance documents.

More coming

Rapid7 is dedicated to strengthening cybersecurity for organizations, protecting consumers, and empowering the independent security research community to safely disclose vulnerabilities they've discovered. All these goals come together on the issue of coordinated vulnerability disclosure processes. As we increasingly depend on complex and flawed software and systems, we must pave the way for greater community participation in security. Facilitating communication between technology providers and operators and independent researchers is an important step toward greater collaboration aimed at keeping users safe.

Rapid7 is thrilled to be working with so many companies, groups, and individuals to advance vulnerability disclosure and handling processes. As government agencies consider how cybersecurity fits into their missions, and how to advise the public and private sectors on what to do to best protect themselves, we expect more opportunities to come.

What follows are some first impressions on the contents of the WikiLeaks Vault7 dump. I won't be addressing the legal or ethical concerns about posting classified data that can endanger the missions and goals of American intelligence organizations. I also won't be talking about whether or not the CIA should be involved in developing cyber capabilities in the first place, as we have previously written about our views on this topic. But, I will talk about the technical content of the documents posted today, which all appear to come from a shared, cross-team internal Confluence wiki used by several CIA branches, groups, and teams.

After spending the last few hours poring over the newly released material from WikiLeaks, Vault7, I'm left with the impression that the activities at the CIA with regard to developing cyber capabilities are... pretty normal.

The material is primarily focused on the capabilities of "implants" – applications that are installed on systems after they've been compromised – and how they're used to exfiltrate data and maintain persistence after an initial compromise of a variety of devices from Samsung smart TVs to Apple iPhones to SOHO routers, and everything in between.

The material also covers the command and control infrastructure that the CIA maintains to remotely use these implants; primarily, the details are concerned with building and testing the various components that make up this network.

Finally, there are the projects that are focused on exploits. The exploits described are either developed in-house, or acquired from external partners. Most of the internally developed exploits are designed to escalate privileges once access is secured, while most of the remote capabilities were acquired from other intelligence organizations and contractors. The CIA does appear to prefer to develop and use exploits that have a local, physical access component.

While there is still a lot left to look at in detail, the overwhelming impression that I get from reading the material is that working on offensive tech at the CIA is pretty similar to working on any software project at any tech company. More to the point, the CIA activities detailed here are eerily similar to working on Metasploit at Rapid7. Take, for example, this post about the Meterpreter Mettle project from 2015 (which was written about the same time as these documents). Tell me that Mettle doesn't read like any one of the technical overviews in Vault7.

As we spend more time digging through the Vault7 material, and if more material is released over time, I expect we'll be less and less surprised. So far, these documents show that the CIA branches and subgroups named in the documents are behaving pretty much exactly as one might expect of any software development shop. Yes, they happen to be developing exploit code. But, as we all know, that particular capability, in and of itself, isn't novel, illegal, or evil. Rapid7, along with many other public and private security research organizations, does it every day for normal and legitimate security purposes.

Until I see something that's strikingly unusual, I'm having a hard time staying worked up over Vault7.

Merry HaXmas to you! Each year we mark the 12 Days of HaXmas with 12 blog posts on hacking-related topics and roundups from the year. This year, we're highlighting some of the “gifts” we want to give back to the community. And while these gifts may not come wrapped with a bow, we hope you enjoy them.

On the seventh day of Haxmas, the Cyber gave to me: a list of seven Rapid7 comments to government policy proposals! Oh, 'tis a magical season.

It was an active 2016 for Rapid7's policy team. When government agencies and commissions proposed rules or guidelines affecting security, we often submitted formal "comments" advocating for sound cybersecurity policies and greater protection of security researchers. These comments are typically a cross-team effort, reflecting the input of our policy, technical, and industry experts, and are submitted with the goal of helping government better protect users and researchers and advance a strong cybersecurity ecosystem.

Below is an overview of the comments we submitted over the past year. This list does not encompass the entirety of our engagement with government bodies, only the formal written comments we issued in 2016. Without further ado:

1. Comments to the National Institute of Standards and Technology (NIST), Feb. 23: NIST asked for public feedback on its Framework for Improving Critical Infrastructure Cybersecurity. The Framework is a great starting point for developing risk-based cybersecurity programs, and Rapid7's comments expressed support for the Framework. Our comments also urged updates to better account for user-based attacks and ransomware, to include vulnerability disclosure and handling policies, and to expand the Framework beyond critical infrastructure. We also urged NIST to encourage greater use of multi-factor authentication and more productive information sharing. Our comments are available here [PDF]: https://rapid7.com/globalassets/_pdfs/rapid7-comments/rapid7-comments-to-nist-framework-022316.pdf

4. Comments to the Dept. of Commerce's National Telecommunications and Information Administration (NTIA), Jun. 1: NTIA asked for public comments for its (forthcoming) "green paper" examining a wide range of policy issues related to the Internet of Things. Rapid7's comprehensive comments detailed – among other things – specific technical and policy challenges for IoT security, including insufficient update practices, unclear device ownership, opaque supply chains, the need for security researchers, and the role of strong encryption. Our comments are available here [PDF]: https://rapid7.com/globalassets/_pdfs/rapid7-comments/rapid7-comments-to-ntia-internet-of-things-rfc-060116.pdf

5. Comments to the President's Commission on Enhancing National Cybersecurity (CENC), Sep. 9: The CENC solicited comments as it drafted its comprehensive report on steps the government can take to improve cybersecurity in the next few years. Rapid7's comments urged the government to focus on known vulnerabilities in critical infrastructure, protect strong encryption from mandates to weaken it, leverage independent security researchers as a workforce, encourage adoption of vulnerability disclosure and handling policies, promote multi-factor authentication, and support formal rules for government disclosure of vulnerabilities. Our comments are available here [PDF]: https://rapid7.com/globalassets/_pdfs/rapid7-comments/rapid7-comments-to-cenc-rfi-090916.pdf

7. Comments to the National Highway Traffic Safety Administration (NHTSA), Nov. 30: NHTSA asked for comments on its voluntary best practices for vehicle cybersecurity. Rapid7's comments recommended that the best practices prioritize security updating, encourage automakers to be transparent about cybersecurity features, and tie vulnerability disclosure and reporting policies to standards that facilitate positive interaction between researchers and vendors. Our comments are available here [PDF]: https://rapid7.com/globalassets/_pdfs/rapid7-comments/rapid7-comments-to-nhtsa-cybersecurity-best-practices-for-modern-v…

2017 is shaping up to be an exciting year for cybersecurity policy. The past year made cybersecurity issues even more mainstream, and comments on proposed rules laid a lot of intellectual groundwork for helpful changes that can bolster security and safety. We are looking forward to keeping up the drumbeat for the security community next year. Happy Holidays, and best wishes for a good 2017 to you!