August 2010 - Posts

A laptop belonging to Yorkshire Building Society shows classic signs of terrible approaches to information security: no disk encryption software, passwords written down and kept with the laptop, and more data than necessary stored on the computer. In the end, the computer was retrieved, but that was more luck than anything else.

Oh, my. Where to Begin

One thing to note before we begin. Yorkshire Building Society (YBS) is a recent merger of two companies: Yorkshire Building Society and Chelsea Building Society (building societies are financial institutions that focus on mortgages). It seems that the problems stemmed from the Chelsea side, since the stolen laptop belonged to them.

Furthermore, Yorkshire already has a policy of encrypting all laptops, something that the Chelsea operations apparently did not have in place.

The gist of the story: an employee with Chelsea returns his laptop, which he had been using at home and which was chock full of customer data, because a manager asks him to. The manager, also with Chelsea, writes down the password to the laptop and places it in the laptop case. The case--along with the laptop and the password--is stolen! The computer is recovered 48 hours later--a miracle--and there are signs that an attempt to access its contents failed--a second miracle.

Writing the password down? Possibly bad. Placing it in the same case as the laptop? Definitely bad. Not having encryption on a laptop with sensitive information? Grounds for getting fined.

Getting Lucky

Yorkshire must have done something good, because they've been blessed with plenty of luck.

1. They were able to recover their laptop. This almost never happens.

2. Whoever stole the laptop was unable to access it. Not that he didn't try. How did he not see the password?

3. The ICO, after looking into the matter, didn't fine them. The ICO has the power to fine up to £500,000 for a data breach. So far, it hasn't used that power.

Of course, for #3 above, it would be unfair to fine YBS. After all, the poor data security practices stem from the former Chelsea Building Society. Think about it.

Chelsea employee uses a laptop at home, one not making use of encryption software like AlertBoot, which happens to be full of customer data, most of which he doesn't need. Laptop is not encrypted by Chelsea. The laptop is given to a manager at Chelsea who writes down the password (password-protection in place, I guess) and places it with the laptop. Obviously, Chelsea forgot to educate their employees about correct data security practices.

It would be terrible to fine the new company because of the actions of an old company. I know it happens, but it's a terrible approach, especially if the new company is trying to do right: if I may go out on a limb, I'm betting that the laptop was "recalled" in order to have it encrypted and then returned to the employee.

The correct approach to laptop data security would have been to have it encrypted prior to saving sensitive data to it. Once the device is out in the field, it becomes somewhat of a challenge to encrypt it safely, since--as seen in the above case--you could have a breach because you decide to encrypt it.

For example, the IT department has to get involved, so you send in the laptop and that's when the unforeseen happens: cases get lost, forgotten, stolen; someone breaks into your car; you get mugged on your way to the office; etc. On the other hand, not encrypting your laptop is not the answer.

How AlertBoot Can Help

AlertBoot is a web-based encryption software suite. As long as the computer is connected to the internet, the user can easily encrypt the laptop, since the process is not unlike installing any other type of software, including games. In other words, the laptop doesn't have to be sent in.

The administrator is needed at a higher level: customizing the central management console, which is also in the cloud and available anywhere with an internet connection. The console also allows him or her to administer machines as necessary, update policies, and run reports to audit the encryption status of computers.

George Hulme over at informationweek.com points towards a report by HITRUST where it's noted that the total cost of HIPAA/HITECH publicized breaches could be a little over $800 million. What's most surprising to me is how much of a positive effect drive encryption software like AlertBoot could have on the numbers.

Numbers Underreported?

As many following this blog know, in September 2009 the HHS started publicizing breaches involving 500 or more people, as required under the HITECH amendments to HIPAA. Nearly a year later, there are over 108 entries involving 4 million people (or rather, medical records).

Of course, the cost of a breach is not reported to the HHS; it's not required and it's none of their business, really. But, people in the industry have enquiring minds, and enquiring minds want to know. So HITRUST applied the general cost of a data breach ($204/record per the latest Ponemon Report) and came up with an estimate of $834 million.
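For what it's worth, the arithmetic is easy to reproduce. A quick sketch--note that the exact record count isn't published, so the figure below is back-solved from the $834 million total and should be treated as illustrative:

```python
# HITRUST-style estimate: per-record breach cost (Ponemon figure)
# multiplied by the number of records reported to the HHS.
COST_PER_RECORD = 204          # USD per record, per the Ponemon report
RECORDS_BREACHED = 4_089_000   # illustrative; roughly the "4 million" cited

estimated_cost = COST_PER_RECORD * RECORDS_BREACHED
print(f"${estimated_cost:,}")  # $834,156,000 -- about $834 million
```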

It's a good guesstimate. On the one hand, there is criticism that the Ponemon Institute's figures are aggressive. I don't quite share this belief, but I understand the critics: the institute appears to have ties with the information security industry (collaborating with software vendors, carrying out studies for a vendor, etc). Isn't that natural, though? Who's going to pay for these reports? Not the guys busy trying to cover up their breaches.

On the other hand, the above guesstimate is based on breaches involving 500 or more people. I'm pretty sure there are plenty of breaches involving fewer records, which should balance out any "aggressiveness" in the per-record cost.

Nearly Two-Thirds Are Theft-Related

As impressive as hundreds of millions might be, perhaps we should turn our attention to what's not a guesstimate: what was involved in a breach.

Looking at the cross-section of these categories and focusing first on simply the number of breaches experienced, the theft of laptops was the number one cause resulting in a total of 32 breaches reported. The next closest leading causes are theft of desktop computers and theft of removable media resulting in 10 and 12 breaches respectively. The total number of thefts reported is an astonishing 68 or 63% of all breaches. [informationweek.com, my emphasis]

While I'm loath to say the current sample at the HHS's site is representative of the "breach population" out there, it does show that a big part of data breaches stems from the loss of digital devices holding sensitive information such as PHI, protected health information.

Such breaches can be curtailed substantially, using the one tool that the Department of Health and Human Services has deemed worthy of providing safe harbor from breach notifications, if used: encryption software.

Whether it's a laptop computer, a desktop computer, an external hard drive, or even a USB memory stick, each one of these product categories can be protected with disk encryption.

Databreaches.net has a link to a FAQ created by Idaho Power, where it notes that employee information was breached when Mercer Health & Benefits lost a backup tape. The information on this tape was not protected with encryption, the same technology that powers AlertBoot endpoint security software, and it looks like the data breach has affected nearly 400,000 people.

Idaho Power, one of the breached companies, has put up a FAQ for its employees and has revealed a couple of things that were not apparent before.

Hundreds of Thousands Affected, Not Sure About Breach Risk

First, it has announced how many people were affected in total. Previously, we only had a partial count: a thousand with this company, another couple of hundred with that company, etc. I already knew from firsthand experience that the numbers couldn't possibly be low. Mercer is a pretty big company, and backup tapes can hold a lot of data (and they usually do).

My own conservative, and unpublished, opinion was that it would affect people in the tens of thousands, at least. Idaho Power claims it's 5,000 of their own employees plus "375,000 other individuals." In other words, approximately 380,000 people were affected by this data breach.

Second, Idaho Power has made it a point to counter Mercer's claims that, despite the lack of data encryption on the tape, the information is probably safe. From Idaho Power's FAQ:

While the tape was not encrypted, Mercer indicates it is not the type of media that is readily accessible. Idaho Power disagrees and we are moving forward with our own independent investigation. You will be informed as the investigation progresses. [my emphasis]

This is the first time I've read where a company openly disagrees with a business associate, be it a partner, a subcontractor, etc. Usually, when a company experiences a data breach through no fault of their own, that company is busy hiding the third-party company's name. For example, when The Gap had a breach back in 2007, it wouldn't mention which company actually caused the breach. I only found out earlier this year when a court case was made public.

I'm not quite sure what to make of Idaho Power's position. Does it mean that Mercer's claim--that the lost information is safe--is incorrect? Does it mean that Mercer could be right, but Idaho Power wants to make sure? Perhaps Idaho is trying to mitigate any potential lawsuits?

I know this much for sure: all of this very well could have been avoided if the information on that backup tape had been protected with encryption.

The Financial Services Authority (FSA) in the UK has fined Zurich Insurance a whopping £2.28 million (US $3.5 million) for the loss of a backup tape in 2008. It was revealed that data encryption was not used to protect the contents of the tape.

46,000 Affected in the UK

The fine was for losing the information of 46,000 customers. The original story I had blogged about had noted that the personal details of 51,000 UK customers had been lost (over 640,000 people had been affected in all). I guess the initial report was slightly off. What is still accurate, though, is that the backup tape went missing en route to a storage center (for security purposes, I imagine).

This is the largest fine the FSA has handed out for a data breach to date. It would have been higher, but Zurich settled quickly, earning itself a 30% discount.

Oblivious to the Data Loss

The fine may also have been for not having adequate controls that would have, if not prevented the loss to begin with, at least prompted Zurich to respond faster to what is a global problem: financial data theft. From property-casualty.com:

Margaret Cole, the FSA's director of enforcement and financial crime, said in a statement, "[ZIP UK] let its customers down badly. It failed to oversee the outsourcing arrangement effectively and did not have full control over the data being processed by Zurich SA. To make matters worse, [ZIP UK] was oblivious to the data loss incident until a year later."

It was also quoted in the same article that "as there were no proper reporting lines in place [ZIP UK] did not learn of the incident until a year later." While I'm probably reading too much into it, it almost sounds as if the breach was discovered early on but relaying the information was mired in bureaucracy because no one knew who to report it to ("no proper reporting lines" speaks volumes to me; maybe it's because I served in the military).

While it's just a supposition, my thought above could well be true: Zurich UK has announced that a "dedicated information security officer" has been appointed since the incident, per the BBC. Waiting an entire year because of that? That's pretty screwed up. Geez, people! Get in touch with the CIO! It might not be the CIO's responsibility per se, but at least they tend to know a thing or two about data security.

Oh, brother. $3.5 million because of one tape that was not protected with encryption software. Well, not entirely because encryption was not used; the issues revolving around data security are pretty complex. On the other hand, using encryption is usually a sign that other data security procedures are also in place (monitoring, auditing, etc).

Needless to say, an encryption tool like AlertBoot would have been extremely useful in this case. Not just as a tool for avoiding fines when a tape goes missing, but for actually protecting sensitive customer information.

I mean, let's be honest: we still don't know whether approximately half a million people around the world will find themselves victims of ID theft. ID-theft-based crimes tend to happen either very soon after a breach or a long time afterward. And why not? Some of that data is pretty much permanent. For example, a Social Security number is for life.

The Royal Wolverhampton Hospitals NHS Trust in the UK has agreed to sign an Undertaking after the Information Commissioner's Office found it responsible for a data breach. The breach was caused by an itinerant CD that was secured with neither data encryption software nor password-protection.

Unable to Ascertain Why CD was Created

The CD was found at a bus stop near the hospital. It contained scans of patient records from the Intensive Care Unit of New Cross Hospital’s Heart and Lung Unit. A total of 112 patient records were breached.

Investigators were unable to determine how the CD ended up at the bus stop, or why the patient information was burned to the CD to begin with. According to scmagazineuk.com, "it was established that there were areas of weakness in the Trust's data protection procedures."

Well, that goes without saying...

A Number of Changes

In light of the breach and the weaknesses that were established during the investigation, the Trust has decided to effect some changes. Among them: "ensuring that patient charts released to consultants are signed for on receipt and chased for return after just one week."

It was only last week that I mentioned Walsh Pharma's data security breach, where a DVD being returned by a third party was lost in the mail. In light of this and other similar cases involving disks and the mail, I have some problems with the NHS Trust's newfound vigor in upholding its data security practices.

Of course, if CD encryption is used, it wouldn't be much of a problem if there were some kind of snafu involving the mail. However, if encryption software like AlertBoot is not used--which it should be, by the way--then the information on the CD is ripe for a breach.

It would be better to have the consultant destroy the CD than mail it back to the NHS unsecured. The consultant would have to provide proof of destruction, of course. Hm. Perhaps the CD should be mailed back already destroyed.

Scientists at Georgia Tech are recommending that passwords be no shorter than 12 characters long. I can think of at least one exception to this recommendation when it comes to laptop encryption software; however, it would behoove most people to follow the recommendation, it seems.

Bruteforcing and GPUs

One of the most effective, yet inefficient, ways of cracking passwords is via bruteforcing, the practice of trying all possible passwords, including senseless words: in theory (and in reality), if you try enough passwords, you've got to hit gold at some point.

However, as I've noted, it's inefficient: if you've got 6 billion possible matches and can only try one password every second, it's going to take at most 6 billion seconds (or 190 years). Computer automation can aid in lowering this number (such as, but not limited to, starting from both ends of the password list), as well as taking advantage of practical realities, such as the fact that people generally use an actual word as their password.

Using a dictionary word lowers the potential password list from the above 6 billion to around 250,000 actual words, which would take about 3 days to exhaust at the rate of one guess per second. (Incidentally, this is known as a dictionary attack, a subset of a bruteforce attack.)

Modern graphics cards, sold out of ordinary retail shops, can attempt far more than one password per second. The team at Georgia Tech that recommends 12-character passwords based its calculations on crack attempts at 1 trillion password combinations per second.

My 6 billion-long list would be finished in less than a second.
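The arithmetic above can be sketched in a few lines (one guess per second for the manual cases, the Georgia Tech rate for the GPU case):

```python
# Worst-case time to exhaust a candidate password list at a fixed guess rate.
SECONDS_PER_DAY = 60 * 60 * 24
SECONDS_PER_YEAR = SECONDS_PER_DAY * 365

def worst_case_seconds(list_size: int, guesses_per_second: float) -> float:
    """Seconds needed to try every password on the list."""
    return list_size / guesses_per_second

# 6 billion candidates at one guess per second: about 190 years.
print(worst_case_seconds(6_000_000_000, 1) / SECONDS_PER_YEAR)  # ~190.3

# Dictionary attack: 250,000 words at one guess per second: about 3 days.
print(worst_case_seconds(250_000, 1) / SECONDS_PER_DAY)  # ~2.9

# At 1 trillion guesses per second, the 6-billion list falls instantly.
print(worst_case_seconds(6_000_000_000, 1_000_000_000_000))  # 0.006 seconds
```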

Strength in Numbers

So, why 12-character passwords? It's a matter of exponential growth: the English alphabet contains twenty-six letters. If a password consists of one letter, the total possible number of passwords is 26. If two letters long, the total is 676 (26 x 26). For three letters, it's 17,576.

If 12 characters long, the total possible number of passwords is 9.5 x 10^16 (or 95 quadrillion, or if you prefer it drawn out--95,000,000,000,000,000).

If the password differentiates between upper and lowercase letters, that 95 quadrillion figure doesn't merely double to 190 quadrillion--it grows by a factor of 2^12 (4,096), to roughly 390 quintillion! Tack on the digits zero through nine, special characters, and international characters, and a 12-character password quickly becomes impossible to crack in a lifetime.

The findings by the Georgia Tech team found the following, per CNN:

The researchers used clusters of graphics cards to crack eight-character passwords in less than two hours.

But when the researchers applied that same processing power to 12-character passwords, they found it would take 17,134 years to make them snap.
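Those figures are straightforward to sanity-check. A quick sketch--the 95-symbol alphabet for the mixed-character case is my own assumption (upper and lowercase letters, digits, and printable specials), not the researchers' exact parameters:

```python
# Possible passwords for a given alphabet size and length, plus the
# worst-case time to exhaust the keyspace at 1 trillion guesses/second.
GUESSES_PER_SECOND = 1_000_000_000_000
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def keyspace(alphabet_size: int, length: int) -> int:
    """Total number of possible passwords."""
    return alphabet_size ** length

print(keyspace(26, 1))   # 26
print(keyspace(26, 2))   # 676
print(keyspace(26, 3))   # 17576
print(keyspace(26, 12))  # ~9.5 x 10^16 (lowercase only)

# Mixed case (52 symbols) multiplies the 12-character keyspace by 2^12:
print(keyspace(52, 12) // keyspace(26, 12))  # 4096

# 95 printable symbols, 12 characters, at a trillion guesses per second:
years = keyspace(95, 12) / GUESSES_PER_SECOND / SECONDS_PER_YEAR
print(round(years))  # on the order of 17,000 years
```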

Like I said at the beginning, I can think of one situation where a password doesn't need to be 12 characters long. When? you might ask.

If your computer's encryption software makes use of rate limiting, like AlertBoot does, where each incorrect guess introduces a delay before the password can be entered again. During the first three tries, things work as they normally do. By the fourth incorrect try, however, you might specify that the password prompt not show up for 10 seconds.

The ability to process 1 trillion passwords a second is of no use if you only get to enter one password every hour. At that rate, you'll still be entering passwords as the universe is collapsing.
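A minimal sketch of that kind of rate limiting--the three free tries and the initial 10-second delay mirror the example above, while the doubling escalation and one-hour cap are my own illustrative policy, not AlertBoot's actual one:

```python
import time

# Escalating delay after failed password attempts: the first three tries
# are free; from the fourth try on, each failure doubles the wait,
# capped at one hour.
FREE_TRIES = 3
BASE_DELAY = 10    # seconds of delay before the fourth try
MAX_DELAY = 3600   # cap the lockout at one hour

def delay_for_attempt(failed_attempts: int) -> int:
    """Seconds to wait before showing the password prompt again."""
    if failed_attempts < FREE_TRIES:
        return 0
    # Double the delay for every failure past the free tries.
    delay = BASE_DELAY * 2 ** (failed_attempts - FREE_TRIES)
    return min(delay, MAX_DELAY)

def check_password(supplied: str, correct: str, failed_attempts: int) -> bool:
    """Rate-limited password check: sleep before even looking at the guess."""
    time.sleep(delay_for_attempt(failed_attempts))
    return supplied == correct
```

With this scheme an attacker's effective guess rate collapses to roughly one try per hour after a couple dozen failures, no matter how fast his hardware is.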

Many online e-mail accounts already use something similar, where you only get so many tries to log in before you're locked out of your account until you can prove you're not a hacker trying to bruteforce your way in.