Not that long ago, cybersecurity was viewed as primarily a technical issue, to be handled by a company’s IT department. But times are changing—at least somewhat. The rise of robust regulatory frameworks related to data privacy and cybersecurity has led to increased compliance risk and potential liability. As a result, lawyers are increasingly required to provide both legal and strategic advice on a range of cybersecurity-related issues, and to work cooperatively with departments throughout their companies to manage the legal, financial, operational, and reputational challenges associated with cybersecurity.

This article seeks to provide a practical overview of the evolving role of in-house counsel at financial institutions in managing cyber risk and achieving cybersecurity compliance. It identifies and discusses four areas in which counsel can be expected to play a key role: (1) cybersecurity governance and regulatory compliance; (2) incident response; (3) managing vendor risk; and (4) mergers and acquisitions. For U.S. corporations—particularly those in the financial services industry—cybersecurity risks pose existential threats. This article aims to address some of the challenges faced by in-house counsel working to protect and defend their corporations, each day, from these risks.

New York Law Journal Publishes Avi Gesser’s Article on the Role of In-House Counsel in Cybersecurity Incident Response Planning
https://www.dpwcyberblog.com/2019/03/new-york-law-journal-publishes-avi-gessers-article-on-the-role-of-in-house-counsel-in-cybersecurity-incident-response-planning/
March 5, 2019

Avi Gesser co-authored an article with Davis Polk associate Matthew Kelly and law clerk Samantha Pfotenhauer that was published in the New York Law Journal on March 1, 2019. The article addresses the role of in-house counsel in preparing for and responding to cybersecurity incidents.
New Amendment Would Significantly Expand Liability Under California Consumer Privacy Act
https://www.dpwcyberblog.com/2019/02/new-amendment-would-significantly-expand-liability-under-california-consumer-privacy-act/
February 28, 2019

A recent bill to amend California’s landmark data privacy law seeks to expand potential liability for violations—bringing little comfort to those already concerned about the risks and challenges associated with achieving compliance in advance of the law’s upcoming effective date.

The proposal—Senate Bill 561, introduced on February 25, 2019, by California Attorney General Xavier Becerra and Senator Hannah-Beth Jackson—would amend the California Consumer Privacy Act (“CCPA”) to expand the scope of the consumer private right of action and to remove a notice-and-cure safe harbor from Attorney General enforcement.

The CCPA is due to become effective January 1, 2020. Among other things, the CCPA establishes a consumer right to request details from covered businesses about the collection of personal information, the purpose of such collection, and third parties with whom the information has been or may be shared. Covered businesses are also required to delete personal information upon request (subject to certain exceptions); must disclose certain information regarding their sale of consumer data; and must provide consumers the right to opt out of having their information sold, without discriminating against those who do opt out.

The CCPA applies to any business that (i) collects personal information about consumers, defined as natural persons who are California residents; (ii) does business in California; and (iii) meets at least one of three criteria: (a) has annual gross revenues exceeding $25 million; (b) buys, receives, sells or shares the personal information of 50,000 or more consumers, households or devices annually; or (c) derives 50 percent or more of its annual revenue from selling consumers’ personal information. Like Europe’s GDPR, the CCPA defines “personal information” broadly, although the definitions under the two rules are not identical, as the CCPA also encompasses information that can be linked to “household[s],” even if not to individual consumers.
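The threshold analysis above lends itself to a simple checklist. The sketch below is illustrative only; the function name and simplified parameters are our own, and actual coverage turns on the statutory text, not this shorthand:

```python
# Illustrative sketch of the CCPA coverage test described above.
# Thresholds are simplified; actual coverage depends on the statute.

def ccpa_covered(
    collects_ca_personal_info: bool,
    does_business_in_ca: bool,
    annual_gross_revenue: float,
    consumers_households_devices: int,    # records bought/received/sold/shared per year
    revenue_share_from_selling_pi: float  # fraction of annual revenue, 0.0-1.0
) -> bool:
    # Both baseline conditions must be met before the size criteria matter.
    if not (collects_ca_personal_info and does_business_in_ca):
        return False
    # Meeting any one of the three size criteria triggers coverage.
    return (
        annual_gross_revenue > 25_000_000
        or consumers_households_devices >= 50_000
        or revenue_share_from_selling_pi >= 0.5
    )

# Example: a small firm handling 60,000 California device records is covered.
print(ccpa_covered(True, True, 5_000_000, 60_000, 0.1))  # True
```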

In its current form, the CCPA provides a private right of action only for consumers whose nonencrypted personal information is stolen or leaked as a result of a business’s failure to implement and maintain reasonable security procedures and practices. Remedies available to consumers under the rule are the greater of actual damages or statutory damages of $100 to $750, but notice and a 30-day opportunity to cure must be provided to the business before a consumer may seek statutory damages. Violations of other provisions of the act are subject to enforcement only by the California Attorney General, who may bring an action for a civil penalty of up to $2,500 per violation or $7,500 per intentional violation. Actions by the Attorney General for violations of the act are also subject to a 30-day notice-and-opportunity-to-cure requirement.
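The remedy structure described above, statutory damages of $100 to $750 per consumer or actual damages, whichever is greater, can likewise be illustrated with a short sketch. The function and figures below are our own simplification, not the statutory formula:

```python
# Illustrative only: the "greater of actual or statutory damages" remedy
# described above, with the statutory figure bounded at $100-$750 per
# consumer. In practice a court sets the per-consumer amount in that band.

def ccpa_damages(actual_damages: float, statutory_per_consumer: float,
                 num_consumers: int) -> float:
    # Clamp the assessed statutory figure into the $100-$750 band.
    per_consumer = min(max(statutory_per_consumer, 100.0), 750.0)
    statutory_total = per_consumer * num_consumers
    # The consumer recovers the greater of the two measures.
    return max(actual_damages, statutory_total)

# 10,000 affected consumers, a $200 statutory figure, $50,000 actual damages:
print(ccpa_damages(50_000, 200, 10_000))  # 2000000.0
```

With thousands of affected consumers, statutory damages quickly dwarf typical provable actual damages, which is why the proposed expansion of the private right of action matters so much to covered businesses.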

Senate Bill 561 proposes the following changes to the CCPA:

Expanding the consumer private right of action to allow consumers to bring suit for any violation of the CCPA, rather than only for theft or leakage of personal information due to the failure to maintain reasonable security precautions;

Eliminating the 30-day notice-and-opportunity-to-cure requirement before the California Attorney General may bring an action for a violation (but leaving in place the 30-day cure period for the private right of action); and

Removing the California Attorney General’s obligation to provide guidance opinions in response to requests from businesses on CCPA compliance, such that the Attorney General would only be permitted (but not required) to publish materials providing “general” guidance on compliance.

If adopted, this amendment could substantially increase potential liability for businesses that violate the CCPA and eliminate some current safe harbors. The proposed amendment does not address other potential issues within the CCPA, including the breadth of its definition of “personal information” or the lack of a distinction between sensitive and nonsensitive personal information.

With the effective date of the CCPA less than one year away, covered businesses should already be active in preparing for compliance. Davis Polk partner Avi Gesser’s advice for preparation is featured in a Cybersecurity Law Report article on considerations for compliance plans and misconceptions surrounding the CCPA.

We will be monitoring updates to the CCPA closely here at the Davis Polk Cyber Blog and will post regularly on any significant developments.

The authors gratefully acknowledge the assistance of law clerk Stephen Rettger in preparing this entry.

Federal Privacy Legislation Is Coming. Maybe. Here’s What It Might Include
https://www.dpwcyberblog.com/2018/11/federal-privacy-legislation-is-coming-maybe-heres-what-it-might-include/
November 20, 2018

Momentum is building for federal data privacy legislation, in large part due to the passage of the California Consumer Privacy Act (CCPA) (which goes into effect in 2020) and other states enacting or considering their own consumer privacy laws. These developments have businesses concerned that they will face a patchwork of inconsistent and onerous state privacy laws, as is already the case with breach notification. Many leading tech companies, trade groups, and the U.S. Chamber of Commerce have voiced support for a national privacy law. On top of these domestic considerations, the EU’s General Data Protection Regulation (“GDPR”), a sweeping privacy law that affects many U.S. companies conducting business in the EU, is also now in effect. Several legislative proposals have been put forward in Congress, and we are starting to see the broad outlines of a potential law. But for many of the details, there is still nothing close to a consensus. Here are some of the issues that will likely be the subject of the most intense debate in the next congressional term:

Scope: Will federal legislation apply to all businesses, or be limited to firms operating primarily in the internet ecosystem? Will it exempt entities already subject to sector-specific privacy laws, like the CCPA, which does not apply to information collected, processed, sold, or disclosed pursuant to the federal Gramm-Leach-Bliley Act? Will it only apply to firms of a certain size, and if so, what will be the threshold? The CCPA applies to for-profit entities that meet at least one of the following criteria: (1) earning $25 million or more in annual revenue; (2) holding the personal data of at least 50,000 California consumers, households, or devices; or (3) deriving at least half of their revenue from selling Californians’ personal data. We think that any final federal privacy law will have at least some of these scope limitations.

Transparency: There is broad consensus that the law will require companies to provide greater transparency to customers on personal data practices. Although companies and consumer groups may disagree at the margins, we expect that a federal privacy law will mandate clear, intuitive disclosures of the categories of personal information collected and of how that data is collected, stored, used, and shared with third parties.

Consent/Opt-In/Opt-Out: One of the most hotly contested issues is whether and how companies will need to obtain consent from customers for certain uses of their data, and whether customers will be required to opt in to, or be allowed to opt out of, certain company practices. Take, for example, the sale of personal information to third parties. Companies will likely have to disclose this practice to consumers. But what if customers don’t want their data sold? Will companies be prohibited from selling those customers’ information for advertising purposes unless those individuals opt in via express consent, or will companies be able to sell their data unless those customers affirmatively opt out? And if customers do opt out, will companies be allowed to deny them any goods or services as a result?

The CCPA provides one model: it offers a limited opt-out right to consumers for the sale of any personal information to third parties, and although it prohibits covered businesses from denying services to consumers who opt out, it does permit those businesses to offer consumers financial incentives not to opt out. A bipartisan federal proposal introduced by Senators Klobuchar (D-Minn.) and Kennedy (R-La.) last spring would grant consumers an opt-out from the collection, use, and sharing of personal data, and would permit a business to restrict or deny service only if the consumer’s opt-out renders the service inoperable.

One approach for federal legislation could be to authorize some data uses without any opt-in or opt-out right (e.g., where a business’s “legitimate interest” in that use is not outweighed by the customer’s interests in protecting the data from misuse), but provide that, for certain uses of particularly sensitive categories of data, customers may opt out and businesses would not be allowed to completely deny those customers service on that basis.

Right to Know/Right to Be Forgotten: There is general agreement that federal privacy legislation will include a right of consumers to know what data companies have related to them. The right to require companies to correct or delete data is more controversial, in part because of the many circumstances in which companies have a legitimate need or legal obligation to maintain customer data, and in part because of First Amendment considerations. In the end, we think that some limited right to have personal information corrected/deleted will be included in any final legislation.

Harm Threshold: For breach notification laws, some states require notification to people whose personal data has been subject to a breach only if there is a risk of harm to those individuals, while other states require notification regardless of the risk of harm. For privacy obligations, businesses generally favor obligations based on actual harm or at least the risk of actual harm to consumers. Consumer advocates, however, argue that limiting privacy rights and remedies to conduct involving actual harm will mean that only a provable financial loss or physical injury will entitle affected consumers to relief, while other damages that are more difficult to prove, like embarrassment, anxiety, and loss of dignity, are often the real harms caused by data breaches, and they will go uncompensated. These groups support a “rights-based” approach to privacy, where harm is assumed (or at least presumed) with a broad definition of personal information. This issue will be vigorously contested, and the federal legislation may incorporate some elements of both approaches. For example, the law may provide for enforcement with civil penalties absent proof of actual harm to consumers, but only in situations in which harm may be presumed, such as where the violation involves an unknown third party gaining unauthorized access to sensitive personal information (e.g., medical history, online banking passwords, complete credit card data, etc.).

Preemption of State Law: Perhaps the most contentious issue will be whether (and to what extent) the federal privacy law preempts state law privacy and data security legislation. The business community is likely to insist on preemption of state laws. It argues that the patchwork of targeted federal laws and inconsistent state privacy laws already imposes excessive compliance burdens, which will only increase as states enact their own versions of the CCPA. Consumer groups have opposed preemption, arguing that the federal government alone cannot provide adequate protection to consumers’ privacy rights, and that states should be empowered to innovate and provide greater privacy protections to their residents. Congressional Democrats have indicated that they are not willing to “replace a progressive California law” through preemption with a “nonprogressive federal law.” Given the high priority placed on this issue from the business community, it is likely that any final law will include at least some degree of state law preemption. A federal law that replaces a patchwork of inconsistent state laws with strong privacy protections for consumers could satisfy most stakeholders.

Data Security and Breach Notification: Closely related to the preemption issue is data security and breach notification. Currently, each state has its own separate data breach notification regime, and more than a dozen states also have substantive data security requirements. Businesses are eager to reduce their compliance burdens by harmonizing their data security and breach notification obligations across the United States, and are therefore advocating for incorporating these requirements into federal privacy legislation, so long as they would be flexible and would have preemptive effect over state laws. It is hard to predict how this will play out, but one possibility is that preemption of state breach notification laws fails to garner enough support, whether because it waters down existing obligations under some state laws or because it is more onerous than others, so that preemption would be limited to privacy (and perhaps data security) obligations, but not breach notification.

Government Enforcement: As a potential compromise for state law preemption, federal privacy legislation may grant authority to state attorneys general to enforce violations of the federal law, in addition to, and in coordination with, a federal agency (likely the FTC). Given that the business community seems relatively open to this possibility of dual state and federal enforcement, we expect that state AGs will have at least some enforcement rights under a comprehensive federal law.

Private Right of Action: Under the CCPA, individuals can bring a private right of action where a certain subset of sensitive information has been improperly accessed. Industry is generally opposed to such provisions. In contrast, consumer groups have asserted that such a right is critical (along with prohibitions on compulsory arbitration). Recent federal proposals (including from Democrats) have not provided for this right, so we believe that it is unlikely to be included in any final legislation, especially if there is enforcement by state attorneys general and fining authority for violations by the FTC.

Certifications: Another controversial issue is whether company executives will be required to certify compliance with applicable privacy requirements, as is the case for the New York Department of Financial Services cyber rules, and if so, what the consequences of an inaccurate certification will be. Senator Wyden’s proposed legislation includes a provision for criminal penalties for executives who provide knowingly false certifications. Our view is that executive certifications, especially ones that could result in criminal penalties, are unlikely to be part of any final legislation.

We will be monitoring developments in federal privacy laws closely here at the Davis Polk Cyber Blog and will post regularly on any significant developments.

This piece has also been published on the Compliance & Enforcement blog, run by the Program on Corporate Compliance and Enforcement (PCCE) at NYU School of Law.

SEC Penalizes Cybersecurity Weakness
https://www.dpwcyberblog.com/2018/10/sec-penalizes-cybersecurity-weakness/
October 23, 2018

A recent SEC Order should be a reminder to registered entities, including small- and medium-sized firms, that the SEC is monitoring the reasonableness of their cybersecurity policies and procedures, and that it may take action in the event of a breach, even in the absence of economic harm.

The SEC’s $1 million settlement with broker-dealer and registered investment adviser Voya Financial Advisors Inc. followed the theft of personally identifiable information of thousands of Voya’s customers. The Order is the first settled SEC action to include a violation of the Identity Theft Red Flags Rule (Rule 201 of Reg S-ID), which Dodd-Frank assigned to the SEC in 2011. The case also extends the SEC’s existing pattern of bringing actions under the Safeguards Rule (Rule 30(a) of Reg S-P) against registered entities—including R.T. Jones Capital Management and Craig Scott Capital—that the SEC views as having failed to take reasonable measures to protect their data against evolving cyber threats.

As part of the settlement, Voya agreed to retain an independent consultant to review and make recommendations regarding Voya’s policies and procedures for compliance with Reg S-ID and Reg S-P.

According to the Order, over six days in April 2016, attackers exploited gaps in Voya’s technical support procedures to obtain usernames and passwords for Voya’s consultant web portal, through which they accessed and stole Voya’s confidential customer data.

The attackers, posing as Voya consultants, called Voya’s technical support line three times and obtained temporary passwords for the consultants’ portal accounts. On two of these three occasions, the support staff also provided the associated account usernames, against company policy.

Voya had known it was a target for “vishing” (voice phishing) attempts, and had maintained a list of numbers associated with prior fraudulent activity. But Voya did not require its support staff to check that list when providing password information. As a result, they failed to detect that the attackers had twice called using a number previously flagged for fraudulent activity.

Hours after the first call, the real account holder notified Voya that he had received an unprompted password reset confirmation email. The issue was escalated to Voya’s Incident Response Team.

But before Voya had a chance to alert the rest of its staff and tell them not to provide temporary passwords by phone, the attackers had called again and obtained a second temporary password using the same method. And then, despite the alert and the instructions given, the attackers obtained yet another password from the support line soon after.

Meanwhile, even after identifying the malicious activity, including the attackers’ IP addresses, Voya’s Incident Response Team did not take steps to block access to affected accounts, to terminate ongoing web sessions, or to block traffic from the attackers’ IP addresses.

Using the portal login information, the attackers were able to access at least 5,600 Voya customers’ personally identifiable information, including the full Social Security or government-issued identification numbers for at least 2,000 customers. The Order notes that there were no known unauthorized transfers of funds or securities from customer accounts as a result of the attack.

The SEC found that Voya had willfully violated both the Safeguards Rule and the Identity Theft Red Flags Rule. The Safeguards Rule generally requires broker-dealers and investment advisers registered with the SEC to adopt written policies and procedures that are reasonably designed to safeguard customer records and information. The Red Flags Rule requires certain registered broker-dealers and investment advisers to develop and implement a written identity theft prevention program that is designed to detect, prevent and mitigate identity theft in connection with the opening of certain accounts.

The SEC found that Voya violated the Safeguards Rule because its cybersecurity policies and procedures to protect customer information and to respond to cybersecurity incidents were not reasonably designed to meet those purposes. Among other technical and operational deficiencies, the SEC noted that Voya did not have reasonable practices with respect to resetting contractor representatives’ passwords, terminating contractor web sessions in the portal, applying controls to consultant accounts, identifying high-risk accounts for additional security measures, or blocking IP addresses associated with known malicious activity.

The SEC found that Voya violated the Red Flags Rule because it did not review and update its 2009 Identity Theft Prevention Program in response to changes in the threat environment and did not provide adequate training to its employees. The SEC also found that Voya’s program did not include reasonable policies and procedures to respond to identity theft red flags, such as those that were detected in the course of the April 2016 intrusion.

In the Voya Order, the SEC is once again putting the industry on notice that it is monitoring the reasonableness of firms’ cybersecurity policies and procedures, that it will assess those programs using a highly fact-specific standard, and that it will expect them to respond effectively to the ever-evolving threats faced by the industry. Registered entities, including broker-dealers and investment advisers, should consider revisiting their programs related to the protection of personally identifiable information and other sensitive data (including, where applicable, Identity Theft Prevention Programs) on a regular basis.

The Davis Polk Cyber Portal is now available to assist our clients in their efforts to maintain compliance with their cybersecurity regulatory obligations. If you have questions about the Portal, please contact avi.gesser@davispolk.com.

Cybersecurity Vendor Due Diligence—Some Practical Tips from the Front Lines
https://www.dpwcyberblog.com/2018/09/cybersecurity-vendor-due-diligence-some-practical-tips-from-the-front-lines/
September 11, 2018

Some of the most significant recent cyber breaches originated at vendors. We have previously discussed the importance of effective oversight of third parties because vendor breaches can lead to regulatory actions for companies. Indeed, recent regulatory guidance provides that vendor diligence is an essential part of any cybersecurity program. This makes sense; there is no point in spending time and resources protecting the data on your network if that same data is unprotected at a vendor. The NYDFS cybersecurity rules require, by March 1, 2019, a vendor diligence program that includes procedures to identify and assess vendor risks, policies outlining the “minimum cybersecurity practices” required of vendors, due diligence procedures to evaluate the vendor’s cybersecurity practices, and procedures to complete periodic tests of the risks and cybersecurity practices of vendors. Other regulators and self-regulatory organizations that have emphasized the importance of vendor cyber diligence include the OCC, the SEC, FINRA, and the NFA.

Here are some things that companies are doing to manage their vendor cybersecurity risk:

General Approach to Vendors

Identifying the vendors that have access to their system or their confidential data.

Placing vendors into different risk categories based on the nature and quantity of nonpublic company information to which they have access.

Creating a policy with specific cybersecurity requirements, audit rights, and cooperation rights for each category of vendor that has access to their data, with the goal of obtaining vendor buy-in.

Over time—through negotiation, consolidation, the selection of alternative vendors, and the bringing of services in-house—reducing the number of vendors that (1) have access to their sensitive data, and (2) do not meet their vendor cybersecurity requirements.
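The risk-categorization step above is, at bottom, a classification exercise. The sketch below illustrates one way a company might tier vendors by data access; the tier labels, fields, and thresholds are hypothetical examples, not regulatory categories:

```python
# Illustrative vendor risk tiering based on access to nonpublic data,
# along the lines described above. Tier labels and rules are hypothetical.

from dataclasses import dataclass

@dataclass
class Vendor:
    name: str
    has_system_access: bool      # can reach the company's network
    handles_sensitive_pii: bool  # e.g., SSNs or account credentials
    record_count: int            # volume of nonpublic records held

def risk_tier(v: Vendor) -> str:
    # Sensitive PII, or network access plus large data volumes, is high risk.
    if v.handles_sensitive_pii or (v.has_system_access and v.record_count > 10_000):
        return "high"
    # Any network access or any nonpublic data warrants a medium tier.
    if v.has_system_access or v.record_count > 0:
        return "medium"
    return "low"

vendors = [
    Vendor("payroll-processor", False, True, 5_000),
    Vendor("caterer", False, False, 0),
]
for v in vendors:
    print(v.name, risk_tier(v))  # payroll-processor high / caterer low
```

Each tier would then map to the contractual requirements, audit rights, and diligence questions (such as those below) appropriate to that level of risk.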

Sample questions for specific vendors to assess their cyber risk

Do you have any cybersecurity certifications?

Do you comply with any applicable guidance or regulations, such as NYDFS, GDPR or NIST?

Do you have a Chief Information Security Officer (CISO)?

To whom does he or she report?

Are you covered by any cybersecurity insurance?

What is covered and what is the deductible?

Describe the access control measures, physical and digital, that you use to restrict employees to the electronic data necessary for their business functions.

Do you employ network segmentation?

Do you monitor activity of authorized users to detect unusual downloading, copying, or altering of nonpublic information?

Do you require two-factor authentication for remote access into the Company’s computer system?

Do you allow the use of removable data storage devices?

Do you allow employees to use company data on personal smartphones?

Are personal and company data segregated on the device?

Are laptops encrypted? Is there a means to remotely wipe data on a lost or stolen phone?

What are your data encryption policies?

What is your password management policy?

What cybersecurity training do you offer to your employees?

Do you have a written incident response plan? Has it been tested?

Do you maintain written disaster recovery and business continuity plans?

How often are these plans updated and tested?

Have you experienced a cybersecurity event in the past two years?

What happened and what is the current status of any remediation efforts?

Have you undergone a cybersecurity risk assessment or a penetration test in the last 12 months? If so, who conducted the test and were all of the recommendations implemented?

How will you cooperate in incident preparedness?

Will you agree to permit the Company to review your cyber policies, procedures and training?

Will you allow the Company to arrange for cybersecurity audits, or will you conduct your own audits and agree to share the results with the Company?

How will you cooperate during incident response?

Will you agree to notify the Company of any data incident concerning the Company’s data within a set time period (e.g., 24 hours) after discovery?

Will you agree to provide all reasonable assistance to the Company with any investigation into a cybersecurity incident affecting the Company’s data?

Will you agree to deliver to the Company any devices, or copies of the contents of any devices, that may be relevant to an incident involving the Company’s data within a certain period of time following a request?

Will you agree to coordinate with the Company on any external communications relating to a cyber incident that involves the Company’s data?

As the NYDFS acknowledges in its FAQ on third-party cybersecurity due diligence, there is no “one-size-fits-all solution,” and companies need to take a risk-based approach to figuring out what obligations they will impose on their vendors to ensure that all their efforts to secure their data won’t be undone by their vendors’ failure to follow suit.

The Davis Polk Cyber Portal is now available to assist our clients in their efforts to maintain compliance with their cybersecurity regulatory obligations. We have a section on the Portal dedicated to Vendor Due Diligence. If you have questions about the Portal, please contact avi.gesser@davispolk.com.

With the Sedona Report, Companies Get Some Helpful Guidance on How to Get Rid of Large Volumes of Old Data
https://www.dpwcyberblog.com/2018/08/with-the-sedona-report-companies-get-some-helpful-guidance-on-how-to-get-rid-of-large-volumes-of-old-data/
August 13, 2018

We have written here before about the challenges and benefits of getting rid of old data. As we have noted, in light of recent legal, regulatory, and technological developments, companies should reevaluate their long-term data management planning. Last week, the New York Department of Financial Services (“NYDFS”) issued a reminder that by September 4, 2018, covered entities must have a policy for disposing of nonpublic information that is no longer necessary for business operations or for other legitimate business purposes, unless required to be retained by law or regulation. GDPR also requires companies to minimize the amount of personal data that they store to what is necessary. At the same time, the case law that has developed under the new Federal Rules of Civil Procedure on spoliation has significantly reduced the risk of sanctions resulting from accidental deletion of electronic materials that might be relevant to a litigation or investigation. But despite these developments, companies operating in the U.S. still have little guidance on how to balance the costs and risks of deleting large volumes of data with the long-term costs and risks of keeping it.

For this reason, we see the recent release of the Sedona Conference’s Principles and Commentary on Defensible Disposition as a watershed moment for data minimization in the United States. The Sedona Conference is one of the nation’s premier non-partisan, non-profit law-and-policy think tanks, whose publications have been relied upon as authoritative by courts when faced with novel data issues. The Sedona Paper begins with the core principle acknowledged in Sedona’s 2014 Commentary on Information Governance: The effective, timely, and consistent disposal of physical and electronic information that no longer needs to be retained should be a core component of any Information Governance program.

The Paper builds on this statement with the following three new principles:

PRINCIPLE 1. Absent a legal retention or preservation obligation, organizations may dispose of their information.

PRINCIPLE 2. When designing and implementing an information disposition program, organizations should identify and manage the risks of over-retention.

PRINCIPLE 3. Disposition should be based on Information Governance policies that reflect and harmonize with an organization’s information, technological capabilities, and objectives.

In the guidance and commentary accompanying these principles, the Paper makes several compelling arguments for data minimization, many of which echo similar arguments that we’ve made here over the last year:

When considering whether to implement a data minimization program, and the scope of any such program, companies should give serious consideration to the long-term costs and risks of keeping data, including:

the projected overall growth in the size of the company’s data over the next 5-10 years, and the associated storage costs

lost productivity associated with searching large volumes of irrelevant data

the cybersecurity and privacy risks of having large volumes of unneeded data, especially considering GDPR-type rights of erasure

internal audit and compliance risks

contractual risks (e.g., obligations to clients and customers to delete data once it is no longer needed)

potential, but not yet reasonably anticipated, litigation or regulatory inquiries.

If there is no legal retention obligation, information should be disposed of as soon as the cost and risk of retaining it outweigh its likely business value.

Typically, as information ages, its business value decreases, and the cost and risk of keeping it increases.

Absent a legal obligation to retain certain documents, companies may dispose of those documents, even if an obligation to keep those documents arises at some point in the future.
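Purely as an illustration of the logic described above (not legal advice, and using hypothetical record categories and retention periods), the decision of whether a given record may be disposed of can be sketched as a simple rule: retain anything subject to a legal hold or a still-running retention period, and treat everything else as a disposal candidate.

```python
from datetime import date, timedelta

# Hypothetical retention schedule (in years), for illustration only --
# a real schedule comes from counsel and the applicable regulations.
RETENTION_YEARS = {"tax_record": 7, "contract": 6, "marketing_email": 2}

def may_dispose(record_type: str, created: date, on_legal_hold: bool,
                today: date) -> bool:
    """Return True if a record may be disposed of: it is not subject to a
    legal hold and its (hypothetical) retention period has expired."""
    if on_legal_hold:          # a preservation obligation trumps everything
        return False
    years = RETENTION_YEARS.get(record_type)
    if years is None:          # unknown category: retain pending human review
        return False
    return today >= created + timedelta(days=365 * years)

# A 2015 marketing email is past its 2-year period and may be disposed of;
# a 2015 tax record is still within its 7-year period; a legal hold
# overrides the schedule in every case.
print(may_dispose("marketing_email", date(2015, 1, 1), False, date(2018, 8, 13)))
print(may_dispose("tax_record", date(2015, 1, 1), False, date(2018, 8, 13)))
print(may_dispose("marketing_email", date(2015, 1, 1), True, date(2018, 8, 13)))
```

The point of the sketch is the ordering of the checks: the hold and retention tests come first, mirroring the Paper’s principle that disposal is permitted only absent a legal retention or preservation obligation.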

Data minimization programs that target narrow categories of documents or a small group of custodians carry greater risk than programs that are generally applicable.

Data minimization programs that are not enforced broadly lead to selective disposal, and thus to increased risk.

Regular data minimization programs may need to be suspended due to legal hold requirements, but those programs should ensure that routine disposal of documents resumes promptly when the legal hold requirements are lifted.

Effective disposal programs also reduce litigation and discovery burdens by:

reducing the time and effort required to identify potentially relevant information,

reducing the cost of searching and analyzing large, and often outdated, data sources,

reducing the cost of implementing and monitoring document preservation obligations,

reducing the number of documents to be collected, processed and reviewed,

reducing the risk that relevant documents will be lost or missed in a sea of irrelevant documents.

The Paper is subject to public comment, but it provides a helpful roadmap for a sensible and effective data disposal program. Still, implementation remains tricky. Companies face a web of overlapping local, federal, and international document preservation obligations, along with their legal hold obligations associated with lawsuits and regulatory inquiries. No company is going to pay someone to actually review millions of old documents to separate the ones that need to be preserved from the ones that can be deleted.

But that separation can be done efficiently, and in a cost-effective manner, through careful planning and the use of advanced data management software and data analytics. These issues, along with a step-by-step approach to responsible document deletion, are discussed further in the webcast below.

Cybersecurity and Vulnerability Assessments: Evolving Law on Hacking and Extortion in the Age of Bug Bounties
https://www.dpwcyberblog.com/2017/11/evolving-law-on-hacking-and-extortion-in-the-age-of-bug-bounties/
Mon, 20 Nov 2017 16:09:10 +0000

Companies and law enforcement are increasingly turning to white hat hackers for help. The FBI apparently paid consultants over $1,000,000 to unlock an iPhone used by one of the shooters in the San Bernardino attacks, and companies such as Microsoft, Uber, Facebook, and Google are paying hackers tens of thousands of dollars to find vulnerabilities in their systems. Davis Polk’s recent cybersecurity webcast discusses why companies are using pools of white hat hackers for certain vulnerability assessments, and how to reduce the risks associated with such “bug bounty” programs. In the one-hour discussion, which is now available below, we cover:

The new DOJ guidelines on bug bounty vulnerability assessments.

When using a bug bounty to test cybersecurity measures makes sense.

Contractual and structural suggestions for an effective bug bounty program.

The line between lawful and unlawful hacking.

When negotiation demands from white hat hackers cross the line into extortion.

Legal options for responding to an extortion demand from a hacker.

To learn more about the risks and benefits of white hat hacking, see our webcast below:

To avoid ending up in the news as the latest victim of a cyber-attack, companies are looking to improve their data security. One way is data reduction─getting rid of old data that you don’t need for business purposes and you are not legally required to keep. The less data you have, the easier it is to protect.

New guidance from the FTC, and recent regulations by the NYDFS, make the express connection between data minimization and cybersecurity. The NYDFS cybersecurity rules provide that, by September 1, 2018, covered entities must have “policies and procedures for the secure disposal on a periodic basis of any Nonpublic Information . . . that is no longer necessary for business operations or for other legitimate business purposes of the Covered Entity, except where such information is otherwise required to be retained by law or regulation . . .”

The case law that has developed under the new Federal Rules of Civil Procedure on spoliation has reduced the risk of sanctions resulting from accidental deletion of electronic materials that might be relevant to a litigation. But taking millions of electronic documents and sorting those that need to be kept for legal reasons from those that can be deleted has, until recently, been so costly and complicated that few companies have even tried.

However, recent advances in data analytics and machine learning are creating opportunities for companies to responsibly delete large volumes of old data, without having to review each document to make sure it is not subject to a legal hold.
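As a rough (and deliberately simplified) illustration of how such triage works: before any machine-learning classifier is applied, many workflows start by routing documents that merely mention hold-related terms to human review, and treating only the remainder as disposal candidates. The term list and documents below are hypothetical.

```python
# Illustrative only: a real system would use trained classifiers and
# document metadata (custodian, date range, matter) rather than bare
# keyword matching, and counsel would define the hold criteria.
HOLD_TERMS = ("litigation", "subpoena", "regulator", "legal hold")

def triage(documents):
    """Split documents into (needs_review, disposal_candidates).
    Anything mentioning a hold-related term is routed to human review
    rather than deleted outright."""
    needs_review, candidates = [], []
    for text in documents:
        lowered = text.lower()
        if any(term in lowered for term in HOLD_TERMS):
            needs_review.append(text)
        else:
            candidates.append(text)
    return needs_review, candidates

docs = [
    "Re: pending litigation over the vendor contract",
    "Lunch menu for Friday",
    "Q3 marketing newsletter draft",
]
review, disposable = triage(docs)
print(len(review), len(disposable))  # 1 2
```

The design choice worth noting is the asymmetry: false positives (harmless documents flagged for review) cost reviewer time, while false negatives (hold-relevant documents deleted) create spoliation risk, so the rules err heavily toward review.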

These tricky issues, along with a step-by-step approach to responsible document deletion, are discussed in the recent webcast below.