April 2011 - Posts

Crypto-erase. Crypto-deletion. Cryptographic deletion. You've heard that it is instantaneous and foolproof. What is it, and how do you do it? Cryptographically erasing data is essentially getting rid of the encryption key to encrypted data. Obviously, a computer must be set up with disk encryption software like AlertBoot for this to work.

What is an Encryption Key and How Do I Get One?

Short answer: with modern encryption software, an encryption key is automatically generated for you, so all you have to do is sign up for an encryption package.

Long answer: encryption always requires a "key." This key is generally a string of characters (a very long, long string of characters) that tells you how to substitute one letter or number for another, which is essentially what encryption is all about. For example, in what's known as the Caesar Cipher, you might have something like this:

  Plain:  ABCDEFGHIJKLMNOPQRSTUVWXYZ
  Cipher: DEFGHIJKLMNOPQRSTUVWXYZABC

This means that A gets substituted with D, C with F, Q with T, and so on. So, the word "cat" would end up as "fdw." This is not the most sophisticated of encryption systems, but it works in a pinch to show what an encryption key does.
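The substitution above can be expressed in a few lines of Python (a toy illustration of how a key drives substitution, not production encryption):

```python
def caesar(text, shift=3):
    """Shift each letter by `shift` positions, wrapping past 'z'.
    The shift amount is the 'key' of this toy cipher."""
    result = []
    for ch in text.lower():
        if ch.isalpha():
            result.append(chr((ord(ch) - ord('a') + shift) % 26 + ord('a')))
        else:
            result.append(ch)  # leave spaces and punctuation alone
    return ''.join(result)

print(caesar("cat"))       # encrypts to "fdw"
print(caesar("fdw", -3))   # shifting back by 3 recovers "cat"
```

Knowing the key (a shift of 3) makes decryption trivial; without it, "fdw" is just three letters.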

Now, imagine that the key is lost. How could you figure out what "fdw" means? Sure, it could be cat, but it could also be dog, fat, kit, sit, or any other three-letter word (or even an abbreviation like fyi).

Crypto-Erasure

And therein lies the "instantaneous deletion" aspect of encryption: you can't tell what the message says without the key.

Encryption, besides jumbling up the plaintext, also ensures that analysis of long encrypted texts (a.k.a. cryptanalysis) won't reveal the secret message. Long story short, it tries to make the information appear as random as possible, as if it had no structure at all.

So, lose the key and you're left with what appears to be random data -- and for all intents and purposes, it IS random data. And when it comes to computers, random data is essentially deleted data. In fact, if you plug an encrypted disk into another computer, the first thing that pops up is a prompt asking whether you'd like to format it, because the disk appears to be brand new.
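To make the "lose the key, lose the data" idea concrete, here's a toy sketch in Python. The hash-based XOR keystream below is for illustration only -- real disk encryption products use AES, not anything like this:

```python
import hashlib
import os

def keystream_xor(data: bytes, key: bytes) -> bytes:
    """Toy stream cipher: XOR data against a SHA-256-derived keystream.
    Applying it twice with the same key recovers the original bytes."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        block = hashlib.sha256(key + counter.to_bytes(8, 'big')).digest()
        stream.extend(block)
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

key = os.urandom(32)                                # the encryption key
ciphertext = keystream_xor(b"patient records", key)

# Crypto-erase: destroy the key. The ciphertext still sits on disk,
# but without the key it is indistinguishable from random bytes.
key = None
```

The erase step is just dropping 32 bytes, which is why it's effectively instantaneous no matter how large the disk is.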

It should be noted, though, that if you have to follow federal, state, and professional organization regulations, you should pay attention to data disposal rules. Despite the power of encryption, you might find that when it comes to disposing of data, you can't just lose the encryption key and then sell the disk on eBay. Instead, you might be required to have it crushed, encrypted data or not.

Rumors are flying that credit cards obtained during last week's Sony PlayStation Network data breach are up for sale to the highest bidder in the on-line underworld. It was only yesterday that Sony offered an FAQ on the issue and revealed the use of encryption on credit card data.

So are the rumors just that, rumors? Or is something else going on here?

"It is Not a Rumor"

At nytimes.com, Kevin Stevens, a senior threat researcher with Trend Micro, had this to say:

Kevin Stevens, senior threat researcher at the security firm Trend Micro, said he had seen talk of the database on several hacker forums, including indications that the Sony hackers were hoping to sell the credit card list for upwards of $100,000. Mr. Stevens said one forum member told him the hackers had even offered to sell the data back to Sony but did not receive a response from the company.

Mashable.com noted that Stevens had also tweeted, "It is not a rumor, it was a conversation on a criminal forum. I never saw the DB so I can’t verify if it is real."

I don't doubt Stevens, but can we extend our trust to the hackers? Could it be that they're trying to pull a fast one while Sony mulls giving out more details? On the one hand, they are criminals. On the other, I do know that most such forums use the "eBay system," where reputation is paramount to making a successful sale today and in the future (they are criminals after all, so transactions can't be easy without a degree of artificial trust).

Consider what this implies for Sony, though. First off, it would mean that Sony lied: the company publicly denied that the three-digit security codes for credit cards were stolen because, it said, they were never collected in the first place. You can't steal what was never saved. The rumors run against this.

Second, it would mean that one of the worst encryption programs in the history of computing was used, seeing how the hackers managed to crack it in two weeks. I know Sony has a penchant for creating their own stuff, but I doubt that they did so in this case. (Of course, there is the possibility that either the encryption key or a password fell into the hackers' hands. It's quite unlikely, though.)

You've also got to consider the possibility that the above is partly truth and partly fiction. The screenshot of such a conversation on krebsonsecurity.com, for example, shows that neither participant actually holds the data. So, it could be that most of the conversation is based on truth, with a little embellishment (the CVV2) added in.

I still can't get over the fact, though, that Sony would lie to the world about the theft of credit card numbers. I mean, the truth on that will get out eventually, via fraudulent charges, so it doesn't make sense that the Japanese company would just set itself up for lawsuits and government investigations (it's a high-profile company).

Sony has released a Q&A addressing frequently asked questions about the PSN data breach. Among other things, it revealed that their "credit card table" was protected with encryption software. However, their "personal data table" was not.

It was not revealed whether passwords were hashed or protected in any other way, although one would imagine that passwords were grouped with the personal data table.

Regardless, Sony is "strongly recommending" that these be changed. As I noted two days ago, this is not necessarily a sign that passwords were stored in plaintext format, although the lack of denial (that they were saved in plaintext) the second time around is worrisome. Sony hadn't mentioned the use of encryption previously, but cleared that up in this particular Q&A, and I was expecting the same for the passwords. On the other hand, I'm not sure how many people, if any, have asked Sony whether passwords were encrypted or hashed.
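For contrast, hashing passwords -- as opposed to encrypting them or storing them in plaintext -- looks roughly like this. This is a generic sketch using Python's standard-library PBKDF2, not a description of Sony's actual setup:

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = None) -> tuple:
    """Derive a salted hash; store (salt, digest), never the password."""
    if salt is None:
        salt = os.urandom(16)  # unique per user, defeats rainbow tables
    digest = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("hunter2")
print(verify_password("hunter2", salt, digest))  # True
print(verify_password("wrong", salt, digest))    # False
```

The point of hashing is that it's one-way: a stolen database of salted hashes can verify logins but cannot be decrypted back into passwords, which is why "encrypted or hashed?" is the question worth asking.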

One thing that puzzles me is this:

Q: What steps is Sony taking to protect my personal data in the future? A: We’ve taken several immediate steps to add protections for your personal data...including moving our network infrastructure and data center to a new, more secure location, which is already underway. We will provide additional information on these measures shortly. [playstation.com, my emphasis]

Huh? As I understand it, this breach was an on-line hack. How is physically moving to another location going to improve security? Do they expect a full-frontal assault one of these days, where servers are torn from the racks upon racks that I imagine Sony must have deployed, and stolen?

That's quite unlikely. On the other hand, it wouldn't be the first time. But then again, I don't think that a company like Sony would be one to use a questionable data center to begin with.

A spreadsheet containing New York Yankees season ticket holders' information was emailed to nearly 2,000 people. The spreadsheet contained information on 21,446 ticket holders. When dealing with emails that hold quite a bit of data, it's always advisable that one look into data encryption like AlertBoot.

VIPs Not Affected

The NYY revealed in a public statement that credit cards, dates of birth, and SSNs were not breached. The spreadsheet contained names, addresses, phone and fax numbers, email addresses, and Yankee account numbers. Yankees rep names and ticket package codes were breached as well, although these would be corporate data, not personal data.

Deadspin.com has noted that the information only affects "non-premium" seats (the spreadsheet excluded luxury suites and the first rows). The breach was accidental: an employee sent the spreadsheet and then tried to recall the email. As anyone familiar with Outlook knows, recall only works when both the sender and the receiver are on the same system.

If some kind of data loss prevention (DLP) software had been set up, or if the spreadsheet had been saved in encrypted format, this breach could have been prevented.

Such Data is Pretty Valuable

Overall, it doesn't sound like this is a calamitous breach. On the other hand, it depends on how you define calamitous. The lack of financial account information and SSNs means that the breach won't hit anyone's pocketbooks (unless some kind of spam/phishing campaign is carried out successfully).

You've got to admit, though, that the information would be somewhat of a boon to telemarketers: they know people's names, where they live, have a way of contacting them, and know that a love for the Yankees is what sets them apart from other people.

But, the information that can be gleaned goes further than this. I don't follow the Yankees, or even baseball for that matter, but apparently the Yankees' management keep their ticket sales figures close to the vest. Deadspin.com notes:

These numbers are fascinating in light of the Yankees' repeated refusal to comment on their ticket sales, at a time when the stadium is obviously not full every night. The contents of the files are ripe for analysis. Members of the NYYfans.com message board are already deciphering the data, and one person made an attempt at crunching some of the raw numbers.

Apparently, the non-premium season ticket revenue is (remember now, this figure is very rough...and possibly waaaaaay off the mark) $131,978,910.

Of course, it doesn't have to be off the mark. Someone who works in the industry, or is familiar with it, may be able to use the information and their background knowledge to make a more precise guesstimate. That the Yankee group won't reveal such figures implies that the data could be of use to someone.

What can I say? Encryption software works, and hindsight shows that the file ought to have been cryptographically protected. Mistakes will happen, so policies that ask employees to check carefully before sending e-mails don't always work (as in this case; the employee must have realized immediately what had happened). A better data security policy may be to always encrypt files that contain sensitive data and will be placed in an e-mail.

On the other hand, that too suffers from the fact that an employee has to remember to encrypt it. The answer might be a DLP that automatically picks up on particular types of information and either stops it from being sent or encrypts it on the fly.
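As a toy sketch of what such on-the-fly detection might look like (the pattern and function names here are my own, not any particular DLP product's), a scanner could flag outgoing text containing credit-card-like numbers that pass the Luhn checksum:

```python
import re

def luhn_valid(number: str) -> bool:
    """Luhn checksum, used to weed out random 16-digit strings."""
    digits = [int(d) for d in number][::-1]
    total = 0
    for i, d in enumerate(digits):
        if i % 2 == 1:       # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# 16 digits, optionally separated by spaces or dashes
CARD_RE = re.compile(r'\b(?:\d[ -]?){15}\d\b')

def contains_card_number(text: str) -> bool:
    """Flag text that contains a Luhn-valid 16-digit number."""
    for match in CARD_RE.finditer(text):
        if luhn_valid(re.sub(r'[ -]', '', match.group())):
            return True
    return False

print(contains_card_number("Invoice total: 4111 1111 1111 1111"))  # True
print(contains_card_number("Call me at 555-0123, thanks"))          # False
```

A real DLP product would hook this kind of check into the mail gateway and scan attachments, but the principle -- pattern-match, then validate to cut false positives -- is the same.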

Massachusetts 201 CMR 17 -- currently the toughest data protection law in the 50 states, according to some -- has claimed its first victim. The Briar Group LLC, which owns bars and restaurants across Boston, has agreed to settle with the Massachusetts Attorney General. Many took to calling this law "Massachusetts's data encryption law." Interestingly enough, this case has nothing to do with encryption (although there are certain practices involved that would be important when it comes to using corporate encryption software).

The Penalty is $110,000, Plus a Little Extra

The Briar Group -- which operates The Lenox, MJ O'Connor's, Ned Devine's, The Green Briar, and The Harp -- has decided to settle with Martha Coakley, Massachusetts's AG:

The judgment, signed on March 28, 2011, by Suffolk Superior Court Judge Giles, requires a payment to the Commonwealth of $110,000 in civil penalties; compliance with Massachusetts data security regulations; compliance with Payment Card Industry Data Security Standards; and the establishment and maintenance of an enhanced computer network security system. [mass.gov]

So, the penalty is actually far more than $110,000, although it could be argued that the Briar Group should have had all the other stuff in place already. Right?

Well, maybe. Interestingly enough the AG's press release also noted:

Although the data breach occurred prior to the effective date of the Massachusetts data security regulations, the data security standards set forth in the regulations were used in the settlement.

Technically, under the law, the Briar Group didn't have to have all those security measures because the "data encryption law" went into effect on March 1, 2010 and the Briar's breach took place between April 2009 and December 2009. Sure, they should have had those security measures in place as a way to deter potential data breaches; however, the law is not retroactive.

On the other hand, good data security ought to be practiced regardless of what the law states.

Undignified Practices

One of the complaints brought forth by the AG is that the Briar Group used default usernames and passwords (read: factory settings) on its point-of-sale systems (POS -- the computers that ring up invoices and keep track of sales and inventory). This is a dangerous practice because default usernames and passwords are publicly known and identical across devices. With the number of POS system manufacturers limited to a handful, you'd only have to attempt a very limited number of passwords before breaking into the POS successfully.

Another complaint was that the Briar Group's employees were allowed to use shared usernames and passwords, which is a variation of the first complaint. Again, it reduces the number of distinct logins, which is bad from a security standpoint (think of it as having a master key that opens the front door, the back door, and the garage door). Plus, it makes it impossible to perform a successful audit if an internal problem is discovered: if ten people share a password and there's internal fraud, which of the ten is responsible?

Incidentally, the above applies to encryption software as well when it comes to password management. It's one of the reasons why AlertBoot prompts a user to change the default password the first time it's used. This way, the default is never used more than once for each system that is encrypted.

Last but not least, the AG noted that the company kept accepting credit cards and debit cards after it had found that their POS systems were breached. I don't think I need to explain why this is bad.

The Briar Group released a statement partially denying the AG's account of the situation:

"We took immediate and aggressive action steps, including: informing the major credit card companies of the potential breach, working with the nation’s leading data security company to identify any weaknesses in our data systems and make system upgrades to further secure customer data and cooperating with a federal investigation into this matter," the statement said. "We are confident that customers dining at one of our restaurants can safely use their credit cards." [boston.com]

I still haven't gotten around to reading Verizon's 2011 Data Breach Investigations Report (74 pages long!), but I have run across summaries of the report. Essentially, the number of breaches has increased dramatically, but the number of compromised records has decreased dramatically. I'd say it's a win for data protection software, including drive encryption software like AlertBoot.

Until I read a quote from Verizon's director of investigative response:

"The FUD is out of control," said Sartin in a phone interview. FUD, short for fear, uncertainty, and doubt, is what the security business sells, suggests Sartin, who dismissed industry jargon like "advanced persistent threat" as a way to drive sales of security products and services.

"People find a rudimentary virus and they think the Chinese are out to get the Colonel's secret recipe," quipped Sartin. [informationweek.com]

It almost seems to imply that the security business sells, well, fluff. That's a bold picture to paint, since one could argue that it's these same tools that security businesses sell that have led to the decreases Verizon has noted. Or have they?

Arrests Made

In an interview, Sartin notes that the reason for the dramatic decline in breached records comes from the fact that the more capable hackers (not the best, just the ones that are most capable in creating mayhem) are under arrest or on the run. This is why over the past three years the records breached have decreased from 361 million to 144 million to 4 million.

These arrests were possible because companies' reactions to data breaches have changed. Sartin notes that large companies have tools in place for dealing with data breaches that rival those used by forensic specialists. Hence, figuring out what's going on during a data breach doesn't take as long as it used to, leaving hackers a smaller window of opportunity to escape.

This also explains why the number of breaches has increased over time: criminals are now targeting smaller companies that are not as well-equipped to deal with data breaches. Smaller companies usually have fewer customers, which means less data on hand, contributing little to the "breached records count." It also means that the security exploits are not, and don't need to be, as sophisticated, leading to higher "turnaround."

Those are interesting conclusions. It seems to indicate that security tools are the reasons why the bigger breaches are, for the time being, a thing of the past (better security tools have led to arrests), not to mention that the lack of better security tools (by smaller businesses) is why breaches are on the increase.

I've got to admit, though, that if people are linking a virus with Chinese government hackers, they're probably engaged in FUD. On the other hand, there is plenty of FUD-sy sounding stuff that is real. For example, it's being reported that there appears to be a phone-based scam making its way through Texas after personal information was breached on-line.