Lots of agencies and organizations want to boss you around about cybersecurity. In April, the SEC and the Justice Department published more directions on the issue. We’ll cover the very brief guidance issued by the SEC’s Division of Investment Management first, and then turn to DOJ in a later post.

First, as with everyone else, the IM Division thinks cybersecurity is very, very important for investment companies and investment advisers.

Second, the staff recommended that advisers and funds consider a number of measures to strengthen cybersecurity:

· Conduct a periodic risk assessment.

· Create a strategy designed to prevent, detect and respond to cybersecurity threats. Specific pieces of the strategy could include: tiered access to sensitive information and network resources; data encryption; restricted use of removable storage media; and development of an incident response plan.

· Implement the strategy through written policies and procedures and training that provide guidance to officers and employees. Then monitor compliance.

· Assess whether protective cybersecurity measures are in place at relevant service providers.
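For illustration only, the "tiered access" suggestion in the list above can be sketched as a minimal role-based check. All of the roles, tiers, and resource names here are hypothetical, not anything prescribed by the guidance:

```python
# Minimal role-based access sketch: each role is granted a clearance tier,
# and each resource requires a minimum tier to be read.
TIER = {"public": 0, "internal": 1, "restricted": 2}

ROLE_TIER = {                     # hypothetical roles and clearance levels
    "intern": TIER["public"],
    "analyst": TIER["internal"],
    "compliance_officer": TIER["restricted"],
}

RESOURCE_TIER = {                 # hypothetical resources and required tiers
    "marketing_material": TIER["public"],
    "trade_blotter": TIER["internal"],
    "client_pii": TIER["restricted"],
}

def can_access(role: str, resource: str) -> bool:
    """A user may read a resource only if the role's tier meets the requirement."""
    return ROLE_TIER[role] >= RESOURCE_TIER[resource]

assert can_access("compliance_officer", "client_pii")
assert not can_access("intern", "trade_blotter")
```

In a real firm the mapping would come from an identity-management system rather than hard-coded dictionaries, but the principle is the same: access to sensitive data is gated by role, not granted by default.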

This is a truncated list, and it isn’t magical. The suggestions could apply to literally any business. You can read the full version here, but FINRA is way ahead of the Investment Management Division in providing usable guidance on how to bolster cybersecurity.

Third, and more interestingly, the guidance suggests that funds and advisers should take their compliance obligations under the federal securities laws into account in assessing their ability to prevent, detect and respond to cyber attacks. So, maintaining a compliance program that is reasonably designed to prevent violations of the securities laws could also mitigate exposure to cyber threats, the guidance says. “For example, the compliance program of a fund or an adviser could address cybersecurity risk as it relates to identity theft and data protection, fraud, and business continuity, as well as other disruptions in service that could affect, for instance, a fund’s ability to process shareholder transactions.” In other words, if a cyber attack prevents you from, say, being able to process shareholder transactions, the staff is going to look back and see how well prepared you were before the assault. If you weren't prepared at all, the end result probably won't be pretty, for the shareholders or you.

The guidance recognizes that it’s impossible to anticipate and prevent every cyber attack. But it wants you to try. And appropriate planning could mitigate the impacts of those attacks, as well as help “compl[iance] with the federal securities laws.” Consider yourself warned.

I haven’t yet turned to a life of crime, so far be it from me to criticize actual criminals’ profit-maximizing strategies. It’s easy for me to nitpick, but I’m not the one strapping on my mask and trying to earn a (dis)honest dollar every day. But have a look at this Reuters story from Tuesday.

In it, we learn that the SEC and the Secret Service are investigating a sophisticated computer hacking group known as “FIN4” that allegedly “has tried to hack into email accounts at more than 100 companies, looking for confidential information on mergers and other market-moving events. The targets include more than 60 listed companies in biotechnology and other healthcare-related fields, such as medical instruments, hospital equipment and drugs.” Apparently their plan is to harvest this information and then trade on it. Nobody knows where FIN4 is from. They could be overseas, but supposedly their English is flawless and they have a deep knowledge of how financial markets work, so maybe they’re in the United States. At one level, a little terrifying!

But this group hasn’t devised a complex, superpowered algorithm to steal information. Instead, it’s allegedly stealing information the (sort of) old-fashioned way: through social engineering. The Reuters story explains that FIN4 “used fake Microsoft Outlook login pages to trick attorneys, executives and consultants into surrendering their user names and passwords.” In at least one case, “the hackers used a confidential document, containing significant information that they had already procured, to entice people discussing that matter into giving their email credentials.”

I have two main thoughts. First, sound information handling practices, and appropriate wariness among professionals using email, still go a long way toward securing confidential data within organizations. It’s often not the most technologically advanced tactics that yield the worst data breaches. Second, FIN4 has embarked on a complex money-making plan. There may be many uses of this information, but one of them seems to be trading securities in the public markets. That’s not as simple as it seems. If you’re doing that, you’re on the grid and can’t really hide. FINRA sees all of those trades and it isn’t that hard for regulators to find out who is making them. When the Consolidated Audit Trail comes online,* it will be substantially easier and faster. In the meantime, broker-dealers are obligated to identify who their customers are. If those people have electronic connections to the ones involved in the hacking, those links could be enough for the SEC to get an asset freeze before profits are siphoned overseas.

What FIN4 is allegedly doing is scary, but they haven’t yet built a criminal ATM.

* Speaking of the Consolidated Audit Trail, when is that thing coming online?

If you thought all the action in privacy regulation centered around the Federal Trade Commission, the Federal Communications Commission would like you to think again. Yesterday, April 28, the FCC held a 3-plus hour workshop that started the regulatory “conversation” on the manner in which the FCC can or should regulate consumer broadband privacy.

Chairman Wheeler kicked off the event with opening remarks that included this unequivocal statement: “Privacy is unassailable.” He also said that “changes in technology do not affect our values.” From these words and the text of the FCC’s “Open Internet” order released earlier this year, not to mention the FCC’s recent $25 million data breach consent decree with AT&T, it is clear the FCC intends to be involved in regulating consumer privacy.

Yesterday’s workshop follows the recent Open Internet order in which the agency determined it would apply certain aspects of its Title II authority to the Internet (namely, certain “common carrier” provisions of the Communications Act). The order has broad impact on issues like consumer access to broadband content that have been widely written about. But what the order means for privacy is that the FCC’s rules on “customer proprietary network information” or CPNI, which have historically applied to traditional telephone companies and interconnected VoIP providers, may apply more broadly in some form or fashion to others in the broadband ecosystem—particularly, broadband Internet access providers.

By way of background, CPNI (defined in Section 222(h)(1) of the Communications Act) is information collected by telecommunications carriers about their customers. CPNI includes things like the “quantity, technical configuration, type, destination, location, and amount of use of a telecommunications service” a customer subscribes to, as well as related billing information. It is a fairly specific definition, and it doesn’t include personal information like name, phone number, address, etc.

Section 222(a) of the Communications Act requires telecommunications carriers to protect the confidentiality of customer information, and Section 222(c) restricts the ability of telecommunication carriers to use, disclose, or permit access to individually identifiable CPNI without the customer’s approval or as required by law.

The Open Internet order makes plain that the FCC intends to apply these CPNI confidentiality provisions (or some form of them) to broadband, and in particular to broadband Internet access providers. But the FCC will need to adopt new rules to apply the CPNI provisions of the statute in this way.

Yesterday’s workshop started that process with a discussion among stakeholders, though no formal rulemaking has been launched. Panelists discussed the privacy implications of broadband Internet access (for example, the kind of data broadband providers have access to) as well as specific concerns with applying Section 222 to broadband Internet access services. One common theme was the potentially overlapping jurisdiction of the FTC and the FCC in the area of privacy. But, significantly, one thing the FCC brings to the table in this area is general rulemaking authority, which the FTC lacks.

We’ll have to watch and wait to see if a notice of inquiry, notice of proposed rulemaking, or other agency guidance will come.

The FCC typically uploads events like this to its archive, so check here or here in a few days if you would like to view the full event.

Over the past months, my experiences with physician practices have made me realize that many practices do not understand how HIPAA applies to subpoenas for medical records. More worrisome, I suspect that many practices nationwide routinely violate HIPAA when they receive a subpoena.

Here’s what I’ve observed: Practices receive state court subpoenas that are signed by lawyers and that demand the production of medical records, and the practices automatically assume they must produce the records. This is a dangerous assumption—the production of the records may very well violate HIPAA.

Here’s what HIPAA requires in general terms: If a practice receives a state court subpoena for medical records that is signed by a lawyer, the practice should not produce the records unless (1) the practice also receives (a) a court order requiring production, (b) a HIPAA “qualified protective order” that’s been entered in the lawsuit, (c) a HIPAA compliant authorization from the patient that authorizes the disclosure demanded by the subpoena, or (d) certain other matters designated by HIPAA’s rule concerning subpoenas, or (2) the practice takes certain additional actions required by HIPAA’s rule for subpoenas.

If a practice receives such a subpoena without receiving any of these “additional” items or taking these “additional” actions, the practice will likely violate HIPAA if the records are produced.

Here’s what practices should do. Because this area of HIPAA is somewhat complex and difficult for practices to navigate on their own, practices should consult with legal counsel when they receive such a subpoena. Legal counsel can advise whether HIPAA permits the disclosure, whether the practice needs to object to the subpoena, and whether other actions should be taken. On numerous occasions, we have reviewed such subpoenas, determined that they did not comply with HIPAA, and sent a letter objecting to the subpoena, and the practice never heard from the parties again.

Takeaway: If you receive a state court subpoena signed by a lawyer demanding the production of medical records, do not automatically produce the medical records.

“BYOD” or “bring your own device” (also known as the “consumerization of IT”) is a fact of life in today’s workplace. BYOD refers to the practice of using personally owned devices—like smartphones, tablets, and laptops—for work purposes and allowing these devices to connect to company networks and applications. According to a Gartner study released in late 2014, 40% of U.S. employees working for large companies use personally owned devices for work purposes. Of those who reported using personally owned devices, only 25% were required by their employers to do so. And of the 75% who chose to use their personally owned devices for work, half did so without their employers’ knowledge.

If that last statistic doesn’t alarm you a little, it should.

BYOD can be great for productivity, but it also creates risk for companies. Why? Because personally owned devices are not managed or controlled by company IT, and security of the device is in the hands of the employee. In other words, these devices can be a source of company data loss or data leakage. For example, if an unencrypted personal laptop with company data on it gets lost or stolen, you may have a data breach on your hands.

So how do you address these concerns? Start with a written, employee-acknowledged BYOD policy.

Here are five things to consider as you develop your policy:

1. Start by building an interdisciplinary team to create the policy. The team should include IT, human resources, legal, compliance, and employees who use their personally owned devices. BYOD is not just an IT issue but also a legal and business issue. Different perspectives will help lead to a policy that fits your organization’s needs and is capable of being followed.

2. Develop the goals of the policy. Security should be a goal, but productivity is also important. Cost savings may also be an objective. These, and other goals, may be in tension at times. In the end, you want to develop a policy that strikes the right balance for your company. Consider a BYOD policy that ensures the only way enterprise data flows into or out of a device is through enterprise systems (e.g., avoid emailing business information to employee personal email accounts).

3. Determine which employees are covered and how they are covered. There may be different policies for different types of employees based on job type or function. For example, the policy for exempt employees may be different than the one for non-exempt employees. Consider whether all employees need to be able to use tablets (for example) to access corporate data for work purposes. Be sure to consult with counsel about employment laws and regulations that may apply in this area.

4. Decide which devices/uses are permitted and how the policy applies to each. For example, the BYOD team should conduct “app reviews” to decide which apps are okay for business use and which are not allowed. Particular types of smartphones, tablets, or laptops may not meet the company’s security requirements and, if not, should not be permitted. Also, the policy for smartphones may be different than the policy for laptops because they are different devices used in different ways and may pose different security risks.

5. Build in training so that each employee knows the policy and how it applies to him or her. In the end, security is about people just as much as it is about technology.

This isn’t an exhaustive list of considerations, but it will help get you started crafting a BYOD policy tailored to your company. It’s important that you do so, if you haven’t already.

Suffering a data breach is bad enough. As often as it appears to happen, companies that are affected by a breach still shoulder a considerable burden. Management must stop the trains to identify the cause and scope of the breach—and then prepare for the aftermath. Lawyers are involved. The company’s brand is at risk. And the costs—employee time, legal fees, security consultants—quickly escalate.

But what if you determine that your company didn’t really need the information that was exposed? Suppose you find out that the breach involved a file that contained drivers’ license numbers or even credit card information, but your company had virtually no administrative need for that information? Or suppose the data pertained to transactions by young adults 7 years ago – and it is highly unlikely that any of the information is still relevant (much less accurate). This is the kind of discovery that adds insult to injury. Your company is forced to stop and invest significant funds to respond to a data breach involving data it no longer has any use for in its marketing or operational efforts.

The surest way to avoid this problem is to review and assess the way you currently collect, retain, and store information. Here are a few items to consider:

· Collection – Do you really need all of the personal information that you are collecting from consumers? Review your intake procedure and revise it to collect only what you need for operational or marketing purposes. Also, are you even aware of all of the different portals through which your company may be collecting data from consumers? Identify every one of them so that your assessment is complete. Do you have someone in your organization responsible for tracking the types of data you are collecting and the different processes through which you are collecting the data?

· Retention – How long are you storing personal information? And for what purposes? Are your practices consistent with PCI standards? What is your current retention policy and are you following it? There are federal and state laws that may govern the retention, disposal or destruction of your data. Be familiar with those laws. Within the confines of applicable laws, be sure you are not holding on to unnecessary or outdated data that would cause you intolerable frustration in the event it was breached. Do you have someone in your organization responsible for overseeing retention and disposal?

· Third Party Partners and Vendors – If you are sharing personal information with other parties (which, of course, needs to be disclosed to consumers in your privacy policy), be sure that your agreements with those parties contain appropriate safeguards. Are you requiring your vendors to secure personal information and prohibit the disclosure of that information? What happens in the event of a breach? Who bears the cost of notification? Are your vendors required to indemnify you if their mistakes lead to actions against your organization?

There is a simple rule that applies in a data breach: You are what you keep. So be careful with what information you currently collect and retain. Talk to your lawyer about whether certain information that you may consider to be “stale” may be properly and legally disposed of. And, more importantly, consider revising your practices so you don’t continue to collect or retain stale or unnecessary information going forward.

We’re behind on this, but better (a little bit) late than never. Last month the SEC’s Office of Compliance Inspections and Examinations released the first results of its Cybersecurity Examination Initiative, announced in April 2014 (and discussed here). As part of the initiative, OCIE staff examined 57 broker-dealers and 49 investment advisers to better understand how these entities “address the legal, regulatory, and compliance issues associated with cybersecurity.”

What the Exams Looked For

In the exams, the staff collected and analyzed information from the selected firms relating to their practices for:

· identifying risks related to cybersecurity;

· establishing cybersecurity governance, including policies, procedures, and oversight processes;

· protecting firm networks and information;

· identifying and addressing risks associated with remote access to client information and fund transfer requests;

· identifying and addressing risks associated with vendors and other third parties; and

· detecting other unauthorized activity.

Importantly, the report is based on information as it existed in 2013 and through April 2014, so it’s already somewhat out of date.

The Good News

The report includes some good news about how seriously the SEC’s registered entities are taking cybersecurity.

· Many firms are utilizing external standards and other resources to model their information security architecture and processes. These include standards published by the National Institute of Standards and Technology (“NIST”), the International Organization for Standardization (“ISO”), and the Federal Financial Institutions Examination Council (“FFIEC”).

Encouraging! But the report didn’t bring all good tidings.

The Bad News

Here are some of the less auspicious facts:

· 88% of the broker-dealers and 74% of the advisers reported being the subject of a cyber-related incident.

· Most of the broker-dealers (88%) require risk assessments of their vendors, but only 32% of the investment advisers do.

· Related to that, most of the broker-dealers incorporate requirements relating to cybersecurity risk into their contracts with vendors and business partners (72%), but only 24% of the advisers incorporate such requirements. Fewer of each maintain policies and procedures related to information security training for vendors and business partners authorized to access their networks.

· A slight majority of the broker-dealers maintain insurance for cybersecurity incidents, but only 21% of the investment advisers do.

The Rest

Almost two-thirds of the broker-dealers (65%) that received fraudulent emails seeking to transfer funds filed a Suspicious Activity Report with FinCEN, as they’re likely required to do. The report then notes that only 7% of those firms reported the incidents to other regulators or law enforcement. It’s not clear to me why the SEC would expect other reports to happen. With the SAR obligations in place, those firms probably, and reasonably, think all the necessary reporting has been done once the SAR has been filed. Also, these firms’ written policies and procedures generally don’t address whether they are responsible for client losses associated with cyber incidents. Along these lines, requiring multi-factor authentication for clients and customers to access accounts could go a long way toward shifting responsibility for those losses onto the users.

But don’t take my word for it. Read the report yourself, linked above and here.

Last week, we posted about the Consumer Privacy Bill of Rights “discussion draft” released by the Obama Administration. On Thursday, March 5, at the annual U.S. meeting of the International Association of Privacy Professionals (which I attended), FTC Commissioner Julie Brill answered questions about her take on the bill and other policy issues. Here are just a few comments from that discussion that merit a follow-up post:

Commissioner Brill stated in no uncertain terms that the draft bill is not protective enough of consumers. At various times, she said there are “serious weaknesses in the draft,” “there’s no there there,” it needs “some meat,” and “where are the boundaries?” She mentioned a specific example of a more consumer-protective approach relating to consent to certain data practices. She indicated she would like to see the bill require affirmative express consent of the individual for (a) material retroactive changes to a privacy statement and (b) use of sensitive information out of context of the transaction in which it was collected.

Although Section 402 of the draft bill provides that the FTC’s unfair and deceptive authority under Section 5 of the FTC Act remains intact, Commissioner Brill expressed concern that it is not clear enough in the bill that the FTC would retain its full authority to enforce the “common law” of privacy as developed in its prior enforcement actions.

Will anything come of the bill this Congress? Commissioner Brill said she expects it’s unlikely given other legislative priorities at this time.

Commissioner Brill commended the Administration for grappling with tough issues and working to improve privacy overall. However, hers is one more voice calling for additional work on broad federal consumer privacy legislation.

Late last week, President Obama released a “discussion draft” of the Administration’s long-awaited Consumer Privacy Bill of Rights Act. At first blush, the results are a mixed bag: some good, some not so good, much work among stakeholders left to be done.

It didn’t take long for consumer advocates, and even one FTC Commissioner, to say the draft legislation doesn’t go far enough. The Internet has been rife with posts this week about the bill’s problems and shortcomings. In summary, for most, the bill landed like a lead balloon.

Still, the Administration released the bill as a “discussion draft”—signaling the draft legislation is just a step and an invitation for further conversation. For a measured perspective considering the bill through this lens, read former Obama Administration official Nicole Wong’s thoughtful article.

While it’s certainly far from perfect, my take is that the bill isn’t all bad. Here are just a few initial pros and cons to the bill that I’ve identified (in no particular order):

Pro: many principles are based on fair information practices familiar from existing federal statutes; flexibility and consideration of measures that are reasonable in context; availability of safe harbor protections; exceptions for de-identified data; and delayed enforcement to allow parties time to adjust to the law’s requirements.

One item of note is that the security provisions in Section 105(a) codify, at a very high level of generality, some of the principles that we’ve been advising our clients about: for example, taking steps to identify internal and external risks to privacy and security of personal data and implementing and regularly assessing safeguards to control risks. (Of course, it’s a separate thing altogether to have recommendations take on the force of law.)

In the end, it may have been inevitable that this bill would be a disappointment to some. After all, the public has been waiting on it since 2012. During that time, there have been many, many high-profile breaches of consumer information. The appetite for more privacy and security protections has only grown over time. But it will take a delicate balance to provide desired protections while at the same time making legal requirements workable for both consumers and the businesses offering products and services consumers want.

To be sure, there will be more to come from the Consumer Privacy Bill of Rights—stay tuned.

When I was at the SEC and online broker-dealers’ customers were the victims of hacking incidents, I used to wonder, why don’t the broker-dealers require multi-factor authentication to gain access to accounts? It was a silly question. I knew the answer. Multi-factor authentication is a pain and nobody likes it.

Multi-factor authentication (MFA) is a method of computer access control in which a user is granted access only after successfully presenting authentication factors from at least two of the following three categories:

· knowledge factors (“things only the user knows”), such as passwords

· possession factors (“things only the user has”), such as ATM cards

· inherence factors (“things only the user is”), such as biometrics.

The idea is, hackers might figure out your password, but they won’t be able to figure out a number that changes every 30 seconds on a card you carry or on your cell phone. They won’t be able to replicate your fingerprint. That’s the idea, anyway. Brokers and banks have been loath to require multi-factor authentication because it’s inconvenient and customers often hate it.
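That “number that changes every 30 seconds” is typically a time-based one-time password (TOTP). As a rough sketch of the mechanics, assuming an RFC 6238-style scheme and using only the Python standard library (the secret and timestamps below are made up for illustration):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, timestamp, digits=6, step=30):
    """Compute a time-based one-time password (RFC 6238 style, HMAC-SHA1)."""
    key = base64.b32decode(secret_b32)
    counter = int(timestamp) // step                    # 30-second time window
    msg = struct.pack(">Q", counter)                    # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(secret_b32, submitted, now=None):
    """Accept the current 30-second window plus one window on either side,
    to tolerate clock drift between the server and the user's device."""
    now = time.time() if now is None else now
    return any(hmac.compare_digest(totp(secret_b32, now + drift), submitted)
               for drift in (-30, 0, 30))

# The server and the user's token derive the same code from a shared secret,
# so a thief who has only the password still can't log in.
secret = base64.b32encode(b"a shared secret!").decode()
code = totp(secret, 1_700_000_000)
assert verify(secret, code, now=1_700_000_000)
```

The point of the second factor is visible in the math: without the shared secret provisioned to the user’s device, knowing the password tells an attacker nothing about the six digits that will be valid in the next 30 seconds.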

But here comes Ben Lawsky, the Superintendent of New York’s Department of Financial Services, who just unveiled a number of proposals to increase cybersecurity at banks under his jurisdiction. One of these is to require that banks use multi-factor authentication. This move could take a lot of the economic pressure off banks that would otherwise like to implement this control for their customers but have been unwilling to do so for fear of losing those customers to rivals. If everybody has to do it, no bank need fear losing customers by imposing it.

That’s not all Lawsky has in mind. His proposal also includes:

· requiring senior bank executives to personally attest to the adequacy of their systems guarding against money laundering;

· ensuring that banks receive warranties from third-party vendors that those providers have cybersecurity protections in place;