November 2009 - Posts

A UK man has been sentenced to nine months in jail for not revealing the password to his encrypted data. This was authorized under Part III of the Regulation of Investigatory Powers Act (RIPA), which makes it an offense not to reveal passphrases to information protected with encryption software like AlertBoot.

While such a move would normally stir arguments in and of itself--in the US, as far as I know, you can't be jailed for not revealing your passwords...something about the right not to incriminate yourself--further controversy has been added because the man suffers from schizophrenia. Or at least, that's the claim of The Register. It was already known that the man had a mental condition, but The Register is the first to report the actual condition he suffers from.

Regulation of Investigatory Powers Act

RIPA went into effect in October 2007. Under Part III of RIPA,

"...a suspect [is given] a time limit to supply encryption keys or make target data intelligible. Failure to comply is an offence under section 53 of the same Part of the Act and carries a sentence of up to two years imprisonment, and up to five years imprisonment in an investigation concerning national security."

The law was passed in order to combat terrorism and other serious crimes. There was an outcry then, and there is an outcry now over this law. As some have pointed out, this latest incident won't win the law more supporters.

The clichéd reasoning justifying such a law is that, if you've got nothing to hide, decrypting any protected information for the authorities to inspect is not a problem. (The technical reason for such a law? With modern technology, it's become nearly impossible to crack properly implemented encryption.)

On the other side are the ones arguing, well, forget about unreasonable searches and the right not to incriminate oneself (perfectly valid concerns in any democratic state), what if you honestly don't remember the password? If you're in the habit of encrypting a design for the world's best toaster-oven because you're afraid of industrial espionage, and happen to forget the password to unlock it...should you go to jail for it? Just because some guy at the airport thinks you're suspicious-looking? After all, this airport security-guy may think, quite correctly, it's encrypted data, so it could be anything.

Who's in the right? Probably both sides. In this particular case, though, I'd probably side with the "unreasonable search" crowd. If you read The Register's account, it's quite clear that the police had nothing on this guy. Ultimately, he was charged with not cooperating with the police and other related matters.

Encryption Cracked?

There was one thing of note to me, personally. From The Register's article: "One file encrypted using software from the German firm Steganos was cracked, but investigators found only another PGP container."

Per Steganos's website, they only seem to have one file encryption product: Steganos Safe. And the encryption algorithm behind it is AES-256. Either the investigators got extremely lucky, or they managed to correctly guess the password that gives access to the encrypted information (which, actually, requires a certain degree of luck as well).
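Some back-of-the-envelope arithmetic shows why guessing the password is the only realistic option. The figures below are illustrative assumptions (a billion guesses per second, an 8-character printable-ASCII password), not measurements, but the gap they show holds at any plausible rate:

```python
# Worst-case time to brute-force an AES-256 key directly versus
# the password that unlocks it. All figures are illustrative
# assumptions, not measurements of any real attack.

GUESSES_PER_SECOND = 1_000_000_000  # assume a billion guesses/sec

aes_keyspace = 2 ** 256   # possible AES-256 keys
password_space = 95 ** 8  # 8 characters of printable ASCII

def years_to_search(space, rate=GUESSES_PER_SECOND):
    """Worst-case time to exhaust a search space, in years."""
    return space / rate / (60 * 60 * 24 * 365)

print(f"AES-256 keyspace: {years_to_search(aes_keyspace):.2e} years")
print(f"8-char password:  {years_to_search(password_space):.2f} years")
```

The key itself is out of reach by dozens of orders of magnitude; a short password falls in months at worst. Hence the investigators' route.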

This is not Steganos's fault. All encryption programs, even AlertBoot, will fall to brute-force attacks (where passwords are guessed at, again and again) if a short or easy-to-guess password is used to secure access to encrypted data.
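The core of such an attack is trivially simple. Here's a minimal sketch of a dictionary attack against a hashed password; the hash function and the passwords are hypothetical stand-ins, and real cracking tools do the same thing millions of times per second:

```python
import hashlib

# Minimal dictionary attack sketch: hash each candidate password and
# compare it against the stored hash until one matches.

def dictionary_attack(target_hash, wordlist):
    for word in wordlist:
        if hashlib.sha256(word.encode()).hexdigest() == target_hash:
            return word
    return None

# Hypothetical example: a weak password drawn from common words
stored = hashlib.sha256(b"sunshine").hexdigest()
print(dictionary_attack(stored, ["password", "letmein", "sunshine"]))
# prints: sunshine
```

A long, random passphrase simply never shows up in any wordlist, which is why password strength matters at least as much as the cipher behind it.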

(And, the case illustrates the fact that encryption, in fact, does work: notice how the product from PGP could not be accessed. It probably also used AES-256; my guess, though, is that our jailed guy didn't use the same password, so the authorities came to a dead end. Hence the arrest and incarceration.)

A laptop computer issued to an employee working for the government of Belize was stolen during a car break-in. It is not known whether laptop encryption was used to protect the contents of the laptop.

Entire Population of Belize Affected

The laptop computer in question was issued to an employee in the Vital Statistics Unit. Among other data, it contained birth certificate information. For the entire nation, which kinda makes sense. (What does the Vital Statistics Unit do? If it's anything like the vital stats departments in any nation, it's probably in charge of records for births, deaths, marriages, and divorces, and the issuing of certificates for these.)

Other items stolen include a cell phone and the keys to the Vital Statistics Office, suggesting that this was a random burglary in which the thief or thieves netted more than they were expecting.

According to Belize's own statistical institute, the mid-2008 census showed that the country had 322,100 people.

On the one hand, it has affected the entire country, so it's a huge deal. On the other hand, it doesn't appear as such in terms of raw numbers: the breach experienced by BlueCross BlueShield last month affected over 800,000 physicians across the US--that's over twice the population of Belize!

Data Protection

There are several ways that the breach could have been prevented. To begin with, and most important, would be not to be carrying around a computer with the vital info on the entire population. I mean, it's like the director of the CIA deciding to take his confidential files with him to lunch...you just don't take your crown jewels with you out of a safe environment.

Another option would have been to use encryption software. It's quite debatable whether this is a better option than keeping data in a safe environment (I would claim that it would be best to keep it in a safe environment and to have it encrypted...break-ins occur at buildings as well); however, in this particular case, the use of disk encryption would have meant more than adequate data security.

Hindsight is 20/20 they say. But, as a person who follows news on information security breaches, I can tell you that I pretty much expected to read "data breach" when I saw "car," "burglary," and "laptop."

What is up with medical organizations these days? Eisai Inc. has alerted the Attorney General of New Hampshire that a stolen laptop contained the information of the company's employees. The laptop itself is password-protected, but data encryption software like AlertBoot endpoint security was not used to protect the contents.

Laptop Stolen from Employee's Car

The breach occurred on October 21, when a laptop was stolen from an employee's car. The incident occurred in New Jersey but the letter to the AG states that 27 New Hampshire residents were affected. A file in the laptop contained names, addresses, and Social Security numbers of employees (past and present) as well as "applicants." It's not quite clear what kind of applicants they're referring to (I take it to mean job applicants).

More Breach Announcement Letters to Come?

In my opinion, there is good reason to believe that this breach involves more than the 27 NH residents above:

Generally, employee information is not saved in separate files, or at least, not in different files separated by state of residence. Chances are the 27 employees above were a subset of a larger list. It just happens that they're the ones being mentioned to the New Hampshire AG; other states' residents don't really figure into the picture when you're contacting a particular state's Attorney General.

Eisai's website has an interactive "career opportunities" page which allows one to search for job openings within the company. While they subdivide the US by regions and whatnot, they also list NJ, MA, MD, PA, NC, CA, and D.C. specifically as locations with possible job openings.

If the lost computer contained some kind of master list, I would imagine people in these states would be affected as well, not to mention residents of bordering states. (NH, while not listed above, lies less than 10 miles from Andover, MA, where Eisai has a research institute.)

Also, considering Eisai's US subsidiary is headquartered in New Jersey, I'd say an argument of a master list being lost is pretty persuasive.

The lack of file encryption on the sensitive file means that there is a potential for affected employees and applicants to become ID theft victims. Eisai has downplayed the risk, stating that there is "no reason to believe that any personal information has been or will be accessed or misused."

I would argue, though, that there is no reason to believe it will not be accessed or misused. The times, they are a-changin', and today's petty thieves don't seem to be looking for just a quick turnaround.

One thing to congratulate Eisai about: they didn't wait 6 months to make the breach public, unlike another medical organization that has recently been in the news over a data breach.

Would Encryption Have Made a Difference?

Most probably, yes. As far as I know, state laws in NJ, MA, MD, PA, NC, and CA exempt a company from publicizing a data breach if the data in question was protected with encryption software. (Not sure what the status is for D.C.)

On a more practical and employee-centric level, data encryption would have pretty much guaranteed that the information would be inaccessible to the laptop thief, which is not a claim that anyone is willing to make about password-protection. Indeed, this is probably why the state exemptions above were given for encrypted data. This would probably ease the minds of employees better than any credit monitoring offered by the company.

Health Net's loss of a hard drive that was not protected with drive encryption is really beginning to pay dividends. The wrong kind of dividends. Arizona's Attorney General has now announced that he will investigate whether Health Net has broken any state laws due to the breach.

Second Attorney General Investigating Company

Attorney General Goddard of Arizona is the second state AG to announce an investigation into Health Net. The first was Richard Blumenthal, Attorney General of Connecticut. Both AGs have expressed dismay and displeasure at the fact that Health Net took six months to contact the affected.

If you'll recall, Health Net lost an unencrypted hard disk with information on over 1.5 million patients across four states. The hard disk contained Social Security numbers and bank account numbers, meaning this is not a run-of-the-mill medical breach. Indeed, I don't see--in terms of the breached data--how different it would be from a breach involving a financial institution. The ramifications of this breach are probably more dire than, say, losing a bunch of x-rays.

Approximately 316,000 Arizona residents were affected by this breach, including past policy holders.

In the Most Expedient Manner Possible

I've read quite a number of law passages relating to data breaches and their notifications, and as I recall, in all cases there has been a provision to the effect of alerting affected state residents "in the most expedient manner and without unreasonable delay." Unreasonable delays are generally introduced when law enforcement requests that the breach not be made public while an investigation takes place (which makes them reasonable, I guess).

In the case of Health Net, however, there is no mention of such a request. Based on what I've read so far, it looks like Health Net actually needed the six months to figure out what was on the stolen hard disk and who to notify.

If so, couldn't one argue that it was a reasonable delay? They couldn't notify anyone any earlier because they didn't have the list. On the other hand, I did note before that six months seems like an inordinately long period to reconstitute the hard disk's contents.

All of this would have been moot, though, if encryption software like AlertBoot had been used on the missing drive. Question: Are New York's and New Jersey's AGs also going to announce an investigation?

You've probably heard the news by now that there is an iPhone worm making its way out of Australia. Initially, it started out as a prank. However, many security professionals pointed out that this code can be modified to do some real harm.

Now, those fears have come true, with a second iPhone worm out in the wild. This one will also affect only those iPhones that have SSH installed and that have neglected to change the root password, "alpine." This new worm behaves like a real bot, regularly checking back with its master for additional commands and updates. It sounds like someone took the original code and just made updates to it.

What does this have to do with data encryption? Absolutely nothing...and yet, maybe everything. It certainly could be construed as an example on why password-protection is not as good as encryption, and why some effort should be given when creating passwords.

Password for Second (Modified) iPhone Worm - New Password Is "ohsh**t"

One of the things the new worm does is change the root password, from "alpine" to something else. Why would the hackers do this? So you won't be able to remove the virus. However, Paul Ducklin, over at Sophos, has found the new password to be "ohshit."

It looks like Mr. Ducklin used a hex editor to do some sleuthing, and then used a password cracker to find the password via brute force.

I often point out how password-protection is worthless when it comes to protecting your sensitive files. Hex editors can be used to reveal the passwords, depending on the situation: for example, passwords protecting Microsoft Word files can be viewed using a hex editor. In those instances where a password to a file cannot be seen (something other than Word), it could be modified using the hex editor, also allowing easy access. In this iPhone worm instance, it allowed the analysis of how the new worm operates.
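For the curious, a hex editor is less exotic than it sounds. Here is a toy hex viewer in the same spirit: it dumps a file's raw bytes side by side with their printable characters, which is how plaintext strings (and weakly hidden passwords) become visible. The sample bytes are purely hypothetical:

```python
# A toy hex viewer: show raw bytes alongside their printable
# characters, the way a real hex editor does.

def hex_dump(data, width=16):
    lines = []
    for i in range(0, len(data), width):
        chunk = data[i:i + width]
        hexpart = " ".join(f"{b:02x}" for b in chunk)
        text = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
        lines.append(f"{i:08x}  {hexpart:<{width * 3}} {text}")
    return "\n".join(lines)

# Hypothetical file contents; strings stand out immediately
print(hex_dump(b"root:$1$salt$hash..."))
```

Anything stored as plain (or lightly obfuscated) text in a file shows up this way, which is why "the password is in the file" is so often game over.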

Using A Password Cracker

The new password for accessing the root account is quite...prosaic. I would imagine it took less than a couple of hours to figure out the password using a dictionary attack. If the people behind this worm had really been serious, they would have made the password a little more complicated.

I mean, no numbers? No capitalized letters? At least? Password-wise, these iPhone botnet-masters are wannabes.
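To put numbers on the wannabe-ness: each character class you add multiplies the search space. A quick sketch for a six-character password like "ohshit" (the alphabet sizes are the standard printable-ASCII counts; the rest is just arithmetic):

```python
# Search-space size for a six-character password under
# progressively larger alphabets.

for label, alphabet_size in [("lowercase only ('ohshit')", 26),
                             ("plus digits", 36),
                             ("plus uppercase", 62),
                             ("plus symbols", 95)]:
    print(f"{label:28} {alphabet_size ** 6:>16,} candidates")
```

Lowercase-only gives roughly 309 million candidates, which a cracker chews through in no time; the full printable set is over two thousand times larger, and every extra character multiplies it again.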

This, though, actually highlights another weakness when it comes to password-protection. Generally, when it comes to password-protection, there is no limit on how many times the wrong password can be entered. Hence, a brute-force attack can be carried out: all possible combinations are tried until something "clicks."

Granted, depending on the password, it may take forever to figure it out. However, most people, including our hackers above, don't give password security much thought. How does this compare with encryption? After all, encryption software also uses passwords for access.

The difference lies in the implementation. With a solution like full disk encryption, typing the correct password reveals, if you will, the encryption key and allows the information to be decrypted. It's this encryption key that's really protecting your data. Hence, one can put in a limit on how many wrong guesses of the password can be made before you dictate that the encryption key not be revealed, ever.

At that point, anyone who really, really wants to read your data would have to guess at the encryption key, which would take centuries to figure out with the current state of the art. You, however, as owner of the data, would just need to present the string of characters forming the encryption key to decrypt the information. So, under encryption, the use of a password is for mere convenience; it's not there to protect anything, unlike password-protection, where the password is the protection (maybe).

If you'll recall, BlueCross BlueShield announced a data breach last month, when an employee lost a laptop with the information of all doctors in their network (apparently, something like 90% of all doctors nationwide). While BCBS uses drive encryption software to secure data, it was in vain: the employee had downloaded the data to his personal laptop.

Not only is an Attorney General (Connecticut) looking into whether BCBS broke any laws, now they've got the AMA opining as well--and, in their opinion, BCBS should offer 5 years' worth of credit protection.

A policy adopted by the American Medical Association House of Delegates,

...calls for the Blues association to offer at least five years of credit protection for all affected physicians, offer more than one company for protection, raise the amount of ID theft insurance and publicly report confirmed cases of identity theft.

The national Blues plan also should provide affected physicians easy access to credit-monitoring reports without cost, and give legal protection and indemnification to doctors for any losses resulting from the breach.

I can tell you right now that that last part is not happening. I may not be a lawyer, but I've learned enough to know that no company in the US goes about indemnifying stuff if they can help it. My guess is BCBS is going to, at least, fight that last provision tooth-and-nail.

What Would It Cost To Offer Credit Monitoring?

To date, we know that 850,000 doctors were affected. Of those, 136,000 to 187,000 physicians used their SSNs as their tax IDs or NPI numbers. BCBS is offering to pay for two years of credit protection for physicians at risk (the AMA seems to be implying that the offer should be five years for all 850,000 doctors, though).

Assuming that BCBS can get a rock-bottom price of $5 per doctor per year,

2 years and 850,000 doctors: $8.5 million

5 years and 850,000 doctors: $21.25 million

2 years and 136,000 doctors: $1.36 million

5 years and 136,000 doctors: $3.4 million
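For the skeptical, the arithmetic behind those figures (using the same hypothetical $5 per doctor per year rate):

```python
# Credit-monitoring cost estimates at a hypothetical rate of
# $5 per doctor per year.

RATE = 5  # dollars per doctor per year

for doctors in (850_000, 136_000):
    for years in (2, 5):
        cost = RATE * years * doctors
        print(f"{years} years and {doctors:,} doctors: ${cost:,}")
```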

Of course, this is assuming that 100% of the physicians decide to sign up for credit monitoring. (And why wouldn't they? After all, it's not just the BCBS that goes around losing data. Credit monitoring means monitoring for all instances of weird financial shenanigans, right?)

Remember, folks: data security is not just about using encryption software and calling it a day. There's more to it, like data monitoring, that requires constant vigilance.