An online forum of the ABA Section of Antitrust Law's Privacy and Information Security Committee

Category Archives: Identity Theft

The 2014 Verizon Data Breach Investigations Report (DBIR) was released on April 22, providing just the sort of deep empirical analysis of cybersecurity incidents we’ve come to expect from this annual report. The primary messages of this year’s DBIR are the targeting of web applications, continued weaknesses in payment systems, and nine categories of attack patterns that cover almost all recorded incidents. Further, despite the attention paid to last year’s enormous data breach at Target, this year’s data shows that attacks against point of sale (POS) systems are actually decreasing somewhat. Perhaps most importantly, the underlying thread that is found throughout this year’s DBIR is the need for increased education and application of digital hygiene.

Each year’s DBIR is compiled based on data from breaches and incidents investigated by Verizon, law enforcement organizations, and other private sector contributors. This year, Verizon condensed their analysis to nine attack patterns common to all observed breaches. Within each of these patterns, Verizon cites the software and vectors attackers are exploiting, as well as other important statistics such as time to discovery and remediation. The nine attack patterns listed in the DBIR are POS intrusions, web application attacks, insider misuse, physical theft/loss, miscellaneous errors, crimeware, card skimmers, denial-of-service (DoS) attacks, and cyber-espionage. Within industry verticals, most attacks can be characterized by only three of the nine categories.

Attacks on web applications were by far the most common threat type observed last year, with 35% of all confirmed incidents linked to web application security problems. This figure represents a significant increase over the three-year average of 21% of data breaches attributed to web application attacks. The DBIR states that nearly two thirds of attackers targeting web applications are motivated by ideology, while financial incentives drive another third. Financially motivated attacks are most likely to target organizations in the financial and retail industries. These attacks tend to focus on user interfaces like those at online banking or payment sites, either by exploiting some underlying weakness in the application itself or by using stolen user credentials. To mitigate the use of stolen credentials, the DBIR advises companies to consider implementing some form of two-factor authentication, a recommendation made to combat several attack types in this year’s report.

The 2014 DBIR contains a wide array of detailed advice for companies who wish to do a better job of mitigating these threats. The bulk of this advice can be condensed into the following categories:

Be vigilant: Organizations often only find out about security breaches when they get a call from the police or a customer. Log files and change management systems can give you early warning.

Make your people your first line of defense: Teach staff about the importance of security, how to spot the signs of an attack, and what to do when they see something suspicious.

Keep data on a ‘need to know’ basis: Limit access to the systems staff need to do their jobs. And make sure that you have processes in place to revoke access when people change roles or leave.

Patch promptly: Attackers often gain access using the simplest attack methods, ones that you could guard against simply with a well-configured IT environment and up-to-date anti-virus.

Encrypt sensitive data: Then if data is lost or stolen, it’s much harder for a criminal to use.

Use two-factor authentication: This won’t reduce the risk of passwords being stolen, but it can limit the damage that can be done with lost or stolen credentials.

Don’t forget physical security: Not all data thefts happen online. Criminals will tamper with computers or payment terminals, or steal boxes of printouts.
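The two-factor authentication recommendation above usually means one-time password codes of the sort generated by authenticator apps. As a minimal illustration (not drawn from the DBIR itself), the following standard-library-only Python sketch implements the HOTP algorithm from RFC 4226, which underlies the time-based codes (TOTP, RFC 6238) most authenticator apps display:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password per RFC 4226."""
    # The moving factor is the counter as an 8-byte big-endian integer.
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    # Dynamic truncation: the low nibble of the last byte picks an offset.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# Test secret from RFC 4226, Appendix D.
secret = b"12345678901234567890"
print(hotp(secret, 0))  # prints "755224", matching the RFC test vectors
```

A time-based code is simply HOTP applied to the current 30-second interval, e.g. `hotp(secret, int(time.time()) // 30)`; because the code changes constantly and requires the shared secret, a stolen password alone is no longer enough to log in.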

These recommendations are further broken down by industry in the DBIR, but they largely come down to a liberal application of “elbow grease” on the part of companies and organizations. Executing on cyber security plans requires diligence and a determination to keep abreast of continual changes to the threat landscape, and often requires a shift in culture within a company. But with the FTC taking a more aggressive interest in data breaches, not to mention the possibility of civil suits as a response to less-than-adequate data security measures, companies and organizations would do well to make cyber security a top priority from the C-Suite on down.

In recent weeks, news and analysis of the data breach announced by Adobe in early October has revealed the problem to be possibly much worse than early reports had estimated. When Adobe first detected the breach, its investigations revealed that “certain information relating to 2.9 million Adobe customers, including customer names, encrypted credit or debit card numbers, expiration dates, and other information relating to customer orders” had been stolen through a series of sophisticated attacks on Adobe’s networks. Adobe immediately began an internal review and notified customers of steps they could take to protect their data. Security researchers have since discovered, however, that more than 150 million user accounts may have been compromised in this breach. While I make no assertions regarding any potential claims related to this breach, I believe the facts of this incident can help convey the difficulties inherent in the ongoing debate over liability in cybersecurity incidents.

The question of whether software companies should be held liable for damages due to incidents involving security vulnerabilities or software bugs has been kicked around by scholars and commentators since the 1980s—centuries ago in Internet time—with no real resolution to show for it. Over the past month, Jane Chong has written a series of articles for the New Republic which revives the debate, and argues that software vendors who do not take adequate precautions to limit defects in their code should bear a greater share of the liability burden when these defects result in actual damages. This argument may seem reasonable on its face, but a particular aspect of the recent Adobe data breach illustrates some of the complexities found in the details that should be considered a crucial part of this debate. Namely, how do we define “adequate” or “reasonable” when it comes to writing secure software?

As Adobe correctly pointed out in their initial announcement, the password data stolen during the data breach was encrypted. For most non-programmers, this would appear to be a reasonable measure to protect sensitive customer data. The catch here lies in two core tenets of information security: First, cryptography and information security are not the same thing, and second, securing software of any complexity is not easy.

When Adobe encrypted their customer passwords, they used a well-known encryption algorithm called Triple DES (3DES) in ECB (electronic codebook) mode. The potential problem is not in the encryption algorithm itself, however, but in its application. Information security researchers have strongly discouraged the use of cryptographic algorithms like 3DES—especially in the mode Adobe implemented—for encrypting stored passwords, because a single encryption key protects every password: once a hacker cracks that key, all of the passwords become readable. In addition, since 3DES in ECB mode always produces the same ciphertext for the same plaintext, hackers can use guessing techniques to uncover certain passwords. These techniques are made easier by users who choose easily guessed passwords like “123456” (used by two million Adobe customers). When you consider that many Adobe customers reuse the same password for multiple logins, which may include banks, health care organizations, or other accounts where sensitive information may be accessed, one can see the value of this Adobe customer data to hackers.
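The core weakness described above is determinism: with one key and no per-user randomness, equal passwords encrypt to equal ciphertexts, so an attacker can spot every account sharing “123456” without breaking the encryption at all. 3DES is not in Python’s standard library, so the sketch below substitutes an unsalted hash as a stand-in for any deterministic, unsalted transformation, and then shows the commonly recommended alternative, a per-user-salted key derivation function (PBKDF2):

```python
import hashlib
import os

# Stand-in for a deterministic, unsalted transformation (such as 3DES in
# ECB mode under one key): identical passwords always produce identical
# stored values, so duplicates are visible to anyone holding the database.
alice = hashlib.sha256(b"123456").hexdigest()
bob = hashlib.sha256(b"123456").hexdigest()
print(alice == bob)  # True -- duplicate passwords leak

# Recommended practice: a slow key derivation function with a random
# per-user salt, so even identical passwords yield different values.
salt_a, salt_b = os.urandom(16), os.urandom(16)
alice_kdf = hashlib.pbkdf2_hmac("sha256", b"123456", salt_a, 100_000)
bob_kdf = hashlib.pbkdf2_hmac("sha256", b"123456", salt_b, 100_000)
print(alice_kdf == bob_kdf)  # False -- salts break the equality
```

This is only an illustration of the design principle, not a reconstruction of Adobe’s systems; the point is that per-user salts and slow derivation functions eliminate exactly the guessing shortcuts the DBIR-era breaches exploited.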

From an Adobe customer’s perspective, it may seem reasonable that Adobe bear some of the liability for any damages that might result from this incident. After all, the customer might reason, Adobe’s network was breached, so Adobe did not do enough to protect customer data. On the other hand, Adobe could justifiably point out that it had taken reasonable precautions to protect its networks, including encrypting the sensitive data, and it was only due to a particularly sophisticated attack that the data was stolen. Further, Adobe could argue, if a customer used an easily guessed password for multiple logins, there is nothing Adobe can do to prevent this behavior—how could it be expected to be liable for digital carelessness on the part of its customers?

These questions will not be answered in a few paragraphs here, of course, but it is clear that any discussion of software liability is not necessarily analogous to product liability theories in other industries, like airlines or cars. Rather, software engineering has its own unique considerations, and we should be careful not to slip too easily into convenient metaphors when considering questions of software liability. Secure software development can be difficult; we should expect no less for questions of law related to this industry.

On January 6, 2012, a Massachusetts District Court, in Tyler v. Michaels Stores, Inc., held that zip code information is personally identifiable information (“PII”) under a state consumer protection statute. In Tyler, the plaintiff provided her zip code to a cashier at Michaels’ arts and crafts store while making a purchase with her credit card. According to the plaintiff, Michaels then combined her zip code with other information to obtain her home mailing address, and began sending unwanted marketing materials. The plaintiff argued that the collection and recording of zip codes during a credit card transaction violates Mass. Gen. Laws ch. 93 § 105, under which a business cannot “write, cause to be written or require that a credit card holder write [PII], not required by the credit card issuer, on the credit card transaction form.”

In its order, the Court dismissed the case because the plaintiff was unable to show cognizable injury. Nevertheless, the Court held that zip codes are PII because such information is consistent with language in a Massachusetts criminal identity theft statute that defines PII as any “number” used “alone or in conjunction with any other information” to assume the identity of an individual. Moreover, despite Michaels’ argument that the state statute applies only to credit card information recorded on paper, the Court stated that the statute applies to all credit card transactions, including those processed manually, electronically, or by other methods.

Businesses that collect customer information at the sales register should continue to closely follow this issue as this case, as well as the recent California Supreme Court decision in Pineda v. Williams-Sonoma Stores, Inc., may foretell lawsuits in other states with consumer protection statutes that are similar to those in Massachusetts and California.

The Massachusetts Attorney General recently announced a $7,500 settlement with Belmont Savings Bank following a data breach in which an unencrypted backup computer tape was lost after an employee failed to follow the bank’s policies and procedures. This tape contained the names, Social Security numbers, and account numbers of more than 13,000 Massachusetts residents.

The tape was lost in May 2011, when an employee left it on a desk rather than storing it in a vault for the night. Surveillance footage showed that the tape was then thrown away by the cleaning crew. The tape was most likely incinerated by the bank’s waste disposal company, and the bank has indicated that it has no evidence that the Massachusetts residents’ personal information had been acquired or used by an unauthorized person.

In addition to the $7,500 penalty, the settlement requires Belmont Savings Bank to take steps to mitigate the risk of future data breaches.

Rhode Island State Senators Ruggerio, Perry, Nesselbush, Jabour, and Ottiano introduced a bill last February, the Consumer Empowerment and Identity Theft Prevention Act. It has been referred to the Rhode Island Senate Committee on Corporations.

The bill would prohibit retailers from recording a credit card number, or all or part of a social security number, as a way to identify the customer paying for a purchase by check. Violations would be punished by a fine of not more than one hundred dollars. The bill would also provide that, unless required by federal law, no one may require a customer to disclose all or part of a social security number incident to a sale of consumer goods or services. Violating this provision of the act would be a misdemeanor, punishable by a fine of not more than five hundred dollars. However, insurance and financial services companies would still be authorized to require service applicants’ social security numbers. Companies providing and billing health care or pharmaceutical-related services could also still require users’ social security numbers, and a consumer applying for a credit card may still be required to disclose his or her social security number.

The bill would also bar any person or business offering discount cards for purchases to require a consumer to disclose all or part of her social security number as a condition to apply for the discount card. Also, no information obtained during the discount card application process could “be sold or given to any other person, firm, corporation or business entity provided, that the person, firm, corporation or other business may: (a) disclose such information to its affiliates, to service providers that perform services for it, or as required by law; and/or (b) transfer such information in connection with the sale of its business operations.”

A similar bill has been introduced by Representative Brian Patrick Kennedy, and is currently before the Rhode Island House Committee on Corporations.

On January 25, 2011, the 112th Congress introduced its first data security-related bill—the Cybersecurity and American Cyber Competitiveness Act (S. 21). The bill is co-sponsored by Senate Majority Leader Harry Reid and several Senate Committee leaders, including Senators Leahy, Levin, Bingaman, Kerry, Rockefeller, Lieberman, and Feinstein. The bill seeks to safeguard critical technology infrastructure from cyber attacks and protect individual privacy by improving identity theft prevention measures, guarding against personal information abuse, and seeking to promote international cooperation to combat cyber threats. More information regarding S. 21 is available in a statement released by the bill’s co-sponsors.

In the last two weeks of 2010, President Obama signed the following three acts addressing privacy:

Red Flag Program Clarification Act of 2010

President Obama signed the “Red Flag Program Clarification Act of 2010,” S. 2987 (“Clarification Act”), on December 18, 2010, which became Public Law No. 111-319. The Clarification Act narrows the definition of “creditor” under the Fair Credit Reporting Act (FCRA) by adding a definition to Section 615(e), 15 U.S.C. § 1681m(e), to address issues with the breadth of the Federal Trade Commission’s Identity Theft Red Flags Rule (“Red Flags Rule”).

The FTC’s Red Flags Rule was promulgated pursuant to the Fair and Accurate Credit Transactions Act, under which the FTC and other agencies were directed to draft regulations requiring “creditors” and “financial institutions” with “covered accounts” to implement written identity theft prevention programs to identify, detect, and respond to patterns, practices, or specific activities—the so-called “red flags”—that could indicate identity theft. The FTC interpreted the definition of “creditor” to include entities that regularly permit deferred payment for goods and services, which swept in lawyers, doctors, and other service providers not typically considered to be “creditors.” This interpretation led to lawsuits by professional organizations, including the American Bar Association, the American Medical Association, and the American Institute of Certified Public Accountants, challenging the FTC’s position that the Red Flags Rule should apply to their members.

The Clarification Act limits the definition of creditor to entities that regularly and in the ordinary course of business: (i) obtain or use consumer credit reports, (ii) furnish information to consumer reporting agencies, or (iii) advance funds to or on behalf of a person. The definition of creditor specifically excludes creditors that “advance funds on behalf of a person for expenses incidental to a service provided by the creditor to that person.” However, the Clarification Act also allows the definition of creditor to be expanded by rules promulgated by the FTC or other regulating agencies to include creditors that offer or maintain accounts determined to be subject to a reasonably foreseeable risk of identity theft.

S. 2987 was introduced by Senator John Thune (R-S.D.) and co-sponsored by Senator Mark Begich (D-Alaska) on November 30, 2010, and the Senate unanimously approved the bill the same day. An identical companion bill, H.R. 6420, was introduced in the House by Representatives John Adler (D-N.J.), Paul Broun (R-Ga.), and Michael Simpson (R-Idaho) on November 17, 2010. S. 2987 passed the House on December 7, 2010.

The FTC had previously delayed enforcement of the Red Flags Rule several times, most recently in May 2010, when it delayed enforcement through December 31, 2010. The FTC’s Red Flags Rule website, http://www.ftc.gov/redflagsrule, notes that the FTC will be revising its Red Flags guidance to reflect the Clarification Act changes.

Social Security Number Protection Act of 2010

President Obama also signed the “Social Security Number Protection Act of 2010,” S. 3789, on December 18, 2010, which became Public Law No. 111-318. S. 3789 was introduced by Senator Dianne Feinstein (D-Calif.) and co-sponsored with bipartisan support, including by Senator Judd Gregg (R-N.H.). The Act aims to reduce identity theft by limiting access to Social Security numbers, according to a statement from Senator Feinstein.

The Act prohibits any federal, state, or local agency from displaying Social Security numbers, or any derivatives of such numbers, on government checks issued after December 18, 2013. The Act also prohibits any federal, state, or local agency from employing prisoners in jobs that would allow access to Social Security numbers after December 18, 2011.

S. 3789 unanimously passed in the Senate on September 28, 2010, and passed in the House by voice vote under suspension of its rules on December 8, 2010.

Truth in Caller ID Act of 2009

On December 22, 2010, President Obama signed into law the “Truth in Caller ID Act,” S. 30, which became Public Law No. 111-331. The Caller ID Act is intended to combat the problem of caller ID “spoofing,” in which identity thieves alter the name and number appearing as caller ID information in an attempt to trick people into revealing personal information over the phone.

The Caller ID Act amended Section 227 of the Communications Act of 1934, 47 U.S.C. § 227, to make it illegal to knowingly transmit misleading or inaccurate caller identification information with the intent to defraud or cause harm. However, the Caller ID Act specifies that nothing in it may be construed as preventing or restricting any person from using caller ID blocking.

The Federal Communications Commission (“FCC”) is required to prescribe regulations implementing the Act within six months. The Caller ID Act specifically exempts law enforcement activity and caller ID manipulation authorized by court order, and it also allows the FCC to define other exemptions by regulation.

The FCC can impose civil forfeiture penalties of up to $10,000 per violation, or $30,000 for each day of a continuing violation, up to a cap of $1,000,000 for any single act or failure to act. Willful and knowing violations of the Caller ID Act can result in criminal penalties, including the same monetary penalties and up to a year in prison.

S. 30 was introduced by Senator Bill Nelson (D-Fla.) on January 7, 2009, and passed in the Senate on February 23, 2010. The bill was approved in the House on December 15, 2010, by voice vote under suspension of its rules. S. 30 was very similar to H.R. 1258, introduced by Representatives Eliot Engel (D-N.Y.) and Joe Barton (R-Tex.) and passed by the House on April 14, 2010, according to a statement released by Representative Engel.