June 2013 - Posts

It looks like the Netherlands will get a greater impetus for the use of encryption software sooner rather than later. According to many sources, the Netherlands' parliament has in front of it a proposal for reporting data breaches to the Dutch Data Protection Authority (DPA or CBP, College Bescherming Persoonsgegevens). Failure to comply with the proposed legislation could mean a monetary penalty of up to EUR 450,000.

DPA Overseeing Breaches

The newly suggested law, if it passes, would give oversight of data breach notifications to the DPA. Currently the ACM (Authority for Consumers and Markets; the Netherlands' consumer and competition regulator, akin to the US's FTC) is in charge when it comes to breach notifications.

Furthermore, the new legislation proposes a change in who needs to report data breaches. Under the new law, all organizations, both public and private, will have to report a data breach to the overseeing authority. And, according to twobirds.com, for breaches

that may have a negative impact on the data-subjects’ privacy, [organizations] will have to [notify] these data subjects as well

The site further goes on to note that,

This implies that the data subjects will not have to be notified if the controller has encrypted the data in an appropriate manner, i.e. in such way that non-authorised persons will not be able to access the data.

Note how it reads "encrypted the data in an appropriate manner." The implication here is that you can't just use any encryption. Failure to follow the law subjects organizations to a maximum penalty of 450,000 euros.

Examples of what types of data breaches might be subject to the law are given, per the site twobirds.com, and the DPA promises to issue guidelines.

What Kind of Encryption?

So, if it cannot be just any kind of encryption, what kind of encryption would satisfy the condition of "encrypted in an appropriate manner"? Obviously, it will depend on the situation. If you are sending something over the wires (data in motion), the required encryption will be different from disk encryption for data-at-rest.

For the latter, at least, people should be looking for a minimum encryption strength of AES-128. It is not only the official encryption algorithm for US government secrets; it has also been studied (and continues to be studied) by many for backdoors and weaknesses. The conclusion is that the algorithm is extremely secure and will be for years to come.

In addition to the above, it might be useful to use encryption that has been FIPS 140-2 validated. Although FIPS is an American standard meant for non-military federal government data, it must be recognized that sensitive personal data is the same regardless of which country one happens to be in. As long as you're looking to protect everyday yet sensitive data, one couldn't do much better than FIPS 140-2 validated encryption.
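To make the "appropriate manner" point concrete, here is a minimal sketch of what modern, authenticated AES-128 encryption looks like in practice. It assumes the third-party Python `cryptography` package, and it is an illustration only, not a statement of what the DPA will actually require.

```python
# Sketch: AES-128 in an authenticated mode (GCM) -- the kind of "appropriate"
# encryption a regulator likely has in mind for data at rest.
# Assumes the third-party "cryptography" package (pip install cryptography).
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)  # 128-bit key, per the minimum above
aesgcm = AESGCM(key)
nonce = os.urandom(12)  # never reuse a nonce with the same key

# GCM both encrypts the data and detects tampering on decryption.
ciphertext = aesgcm.encrypt(nonce, b"patient record #4711", None)
plaintext = aesgcm.decrypt(nonce, ciphertext, None)
assert plaintext == b"patient record #4711"
```

Note that the mode matters as much as the algorithm: raw AES with a weak key, or with no integrity check, is exactly the "just any encryption" that would likely fail the "appropriate manner" test.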

The theft of a laptop computer from the King County Sheriff's Office (Seattle, WA) has triggered a data breach involving approximately 2,300 people. It's the type of data breach that is easily prevented via the judicious use of laptop encryption like AlertBoot Full Disk Encryption. According to a spokesperson, "there was no malicious intent."

Car Theft

It's the oldest story in the book: computer gets stolen from car. Or, rather, truck, in this case. According to komonews.com:

The laptop and a personal hard drive were full of case files, including personal information about thousands of crime victims, suspects, witnesses and even police officers.

The laptop was stolen last March from the backseat of a detective's undercover pickup truck.

It took KCSO nearly three months to notify victims because "they had to figure out who they needed to notify." Normally, my cynical self would claim that this is hogwash. Washington state, like most states that have data breach notification laws, allows the dissemination of the breach via public notice (like newspapers, the television channels, etc.) if an organization is left without specific ways of notifying victims.

However, seeing whose data was involved, it stands to reason that notifying them should be done discreetly.

Violation of Policy

The detective in question "violated KCSO policy," although it wasn't specified what the policy is. Is it (1) always using encryption software when sensitive data is stored on a digital storage device, like a laptop or an external hard drive, (2) not storing any sensitive data on devices that were not issued by the Sheriff's Office, (3) never taking sensitive data outside of the office, or (4) a combination of the above (or even something else)?

It's obvious to me that the type of policy matters. If it's one of those policies that look good on paper but have zero feasibility, it's hardly fair to the detective at the center of this data breach, is it?

Bueller?

According to komonews.com, "this was not the first data loss [the Sheriff's Office] had, but it was the largest." Sigh. Well, at least there is this: the KCSO was in the process of deploying encryption software on all department computers (60% of them encrypted at the time of the theft). Obviously, the stolen computer was not cryptographically protected.

Seeing how three months have passed since then, one can only expect that that figure is now at 100%.

An easier approach may have been the use of AlertBoot FDE. Because the deployment process occurs over the Internet, any computer that required encryption could have been protected from any internet hotspot. The usual logistics of having to bring in the laptop to the IT department, dropping it off, and picking it up would have been superfluous.

Sang Mun (no relation), a graduate of the Rhode Island School of Design, has released and made available for free a set of fonts designed to be "more difficult for data collectors... to decrypt," according to vice.com. While it fails to provide the level of data protection that one expects from mobile device security like AlertBoot, its roots, one could argue, lie in the hacker culture of the 1980s and 1990s, when leet speak flourished.

Confuses Machines, Maybe Even Humans

The creator of the font stated, according to vice.com:

The project started with a genuine question: How can we conceal our fundamental thoughts from artificial intelligences and those who deploy them?" he writes. "I decided to create a typeface that would be unreadable by text scanning software (whether used by a government agency or a lone hacker)—misdirecting information or sometimes not giving any at all. It can be applied to huge amounts of data, or to personal correspondence.

You can check out the font here. Maybe it's just me, but it seems that some of the individual letters make the text hard for people to read as well.

Mun has no illusions that even a clever cryptographic font—which he says you can use in email messages to shield them from snoops and font-recognition bots—will remain encoded for long. They're not meant to be long-term tools with which to combat the NSA. Rather, he views them as an awareness-raising measure.

The last time I was interested in OCR technology, the field had a long way to go, and certain fonts lent themselves to better scanning and conversion than others. Perhaps Sang Mun's fonts have taken this into account, but I wouldn't be surprised if the NSA already had something in place.

A "Solution" That's as Old as the Hills

This new font is not a truly novel idea. In the 80's and 90's, hackers (and hacker wannabes) already explored such tactics, although it didn't involve typography per se. Rather, they substituted similar-looking characters, such as a zero (0) for an oh (o or O) or a five (5) for an es (s or S); in certain cases, whole words were substituted, like "leet" for "elite."
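As a toy illustration of that substitution trick, a few lines of Python suffice (the particular character mappings here are arbitrary choices, not a canonical leet alphabet):

```python
# Toy leet-speak substitution of the kind described above: swap look-alike
# characters so that naive keyword matching misses the word.
LEET = str.maketrans({"o": "0", "O": "0", "s": "5", "S": "5", "e": "3", "E": "3"})

def to_leet(text: str) -> str:
    """Replace each mapped character with its look-alike digit."""
    return text.translate(LEET)

print(to_leet("elite hackers"))  # prints "3lit3 hack3r5"
```

A human reads "3lit3" as "elite" without much effort; a scanner doing exact string matching sees nothing of interest, which was the whole point.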

The reasoning behind this was that certain government organizations, like the NSA, had significant processing power (they still do, quite obviously) and using non-standard words would throw a wrench in the works when it came to online communications tapping.

The reality, though, is that leet speak probably didn't slow them down a bit. In the end, efforts like Sang Mun's might truly be about getting a conversation going.

Researchers have found that the cache folder of your browser, regardless of which one you use, could very easily contain personal information. An easy way to counter this problem is to clear your cache. Or, to use the private mode (aka, porn mode) that is present in all major web browsers. Using laptop encryption like AlertBoot cloud managed FDE for something like this would be overkill (but it would still protect a computer user, since the encryption prevents anyone from accessing the computer if lost or stolen), not to mention it doesn't help against online attacks.

Check Images, Credit Reports, Prescription Information Left Behind

According to latimes.com, researchers found that 21 out of 30 company websites "failed to use the correct technique to block sensitive transmissions from being stored on a computer or smartphone." Companies included ADP, Verizon, Scottrade, Geico, Equifax, PayPal, and Allstate.

What's a browser cache? It's a folder, expressly used by the web browser, to temporarily store data. For example, if you're a fan of cnn.com and visit their website regularly, chances are that a copy of CNN's logo (the one at the top center of the page, with a world globe next to it) is stored in your browser's cache. Why? Because it's faster to read a copy of the image from your computer as opposed to downloading it from CNN's servers. This gives your web experience a speed boost. This is true even if you have fiber-to-the-home, although it's not as relevant as when you were surfing at 56 Kbps.

The thing about the cache, though, is that it's an unprotected folder. Anyone can get into it. Hence, it's also a good idea to ensure that sensitive information does not get stored in it.

Website developers and operators have the tools and means to ensure that sensitive data does not get temporarily stored. The researchers who carried out the study,

called on website developers to immediately audit their code to make sure only basic data is being stored in a browser’s cache. He said browser developers should also move away from the current standard of caching everything. Instead, website developers should have to explicitly say what they want to save.
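The researchers' recommendation boils down to websites explicitly marking sensitive responses as non-cacheable via HTTP headers. Here is a minimal sketch of that idea; the endpoint paths are made up for illustration:

```python
# Sketch of the fix the researchers call for: the server explicitly tells the
# browser not to write sensitive responses to its cache folder.
# The paths below are hypothetical examples, not any real site's endpoints.
SENSITIVE_PATHS = {"/account/statement", "/prescriptions"}

def headers_for(path: str) -> dict:
    """Build response headers, opting sensitive pages out of caching."""
    headers = {"Content-Type": "text/html"}
    if path in SENSITIVE_PATHS:
        # "no-store" instructs the browser never to save this response to disk.
        headers["Cache-Control"] = "no-store, no-cache, must-revalidate"
        headers["Pragma"] = "no-cache"  # for legacy HTTP/1.0 clients
    return headers

assert "Cache-Control" in headers_for("/account/statement")
assert "Cache-Control" not in headers_for("/logo.png")
```

The static logo from the cnn.com example above is exactly the kind of content that should stay cacheable; it's the check images and credit reports that need the `no-store` treatment.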

Long story short: it's not your fault. It's the fault of developers. On the other hand, developers may have relied (a little too much) on their experience to coast on security: in the past, certain browsers made it a point not to store https traffic, but this default was turned on its head in subsequent updates.

A bad move, if you ask me. It's https data for a reason. On the other hand, my Facebook connection is also https, and, believe you me, there's very little in there that requires security.

Digging Up Deleted Files

I'm frequently reminded of what a treasure trove the browser cache happens to be because I make it a point to run data recovery tools on my laptop computer. Basically, I'm trying to see what I can recover of my own personal data. A self-audit, if you will, to see my risk exposure level.

Invariably, the browser cache tends to be the most fun when going through such an assessment. Despite the fact that it's supposed to clear itself, I generally find stuff in there that I looked up six months ago, random images of women in bikinis (thank you scummy ads for penny stocks on certain news sites I visit), etc.

Clearing the cache is something I do once in a blue moon, though. No way that I'm going to be doing this on a daily or even monthly basis. It might seem like a bad move, but generally, I feel a bit secure knowing that my computer is protected with full disk encryption, so in the event that my computer is stolen, no one will be able to easily look up any data whatsoever.

One of the things that irks me about the computer data security business is the following claim: I don't need computer security like laptop encryption because I trust my employees. To use such data protection tools would imply otherwise.

When facts are laid out, such as 33% of all data breaches (or whatever stat is being offered for that day, week, or month) being initiated by employees – whether they had ill-intentions or not – the response tends to be "Those companies must have hired the wrong people."

The Case of SynerMed (and a Big Bulk of the Companies that Experience Data Breaches)

This constant state of denial, from well-meaning people whom I would love to work for (the powers that be love employees so much they're willing to make a boneheaded move to make workers feel better? Sign me up!), is probably one of the leading reasons why we have so many data breaches, and results in a typical story like this one from the wsj.com:

Mr. McLachlan's company has had to learn some security lessons the hard way. SynerMed reported last week that thieves stole a laptop containing records of emergency room visits from a worker's car, where it was left overnight, potentially exposing the records of 3,100 patients. Mr. McLachlan said because employees are not supposed to store patient data on local hard drives, it did not have a policy to encrypt laptops.

The wsj.com article goes on to list other medical data breaches that have occurred in California this year. SynerMed is not alone in being remiss when it comes to data breaches. However, SynerMed happens to be a healthcare technology vendor. The irony, which shouldn't be hard to find, is palpable. (The company has decided to encrypt its laptops after the ordeal.)

The thing is, at the end of the day, SynerMed hasn't done anything wrong. Yeah, they had a data breach, but only because of some sticky-fingered scoundrel who makes it his business to break into cars. Under the circumstances, pointing fingers at SynerMed is to engage in victim-blaming.

It's Not About What You Think of Your Employees

But I'm sure there is a small (or large) corner of our mind where we're pretty sure that SynerMed is to blame for the data breach. Because it could have been avoided so easily by protecting the computer's hard disk with laptop encryption software, and because computers being stolen out of parked cars is not some kind of new revelation. It happens, on average, every day across the US.

SynerMed is a technology vendor. They know better (at least, I assume they do). Why didn't they use encryption? "Because employees are not supposed to" blah blah blah (that's not a typo. I meant to type blah blah blah because this reason, which I admit could have been quoted out of context, is blather. Plenty of employees do stuff that they're not supposed to do. That's why people get fired, demoted, transferred, or given the chance to resign).

It's about whether you value your clients or not. Who is put at risk in the event of a data breach? Who bears the brunt if that data breach turns into something bigger? Even with all the laws and regulations, it's not your employees.

According to the washingtonpost.com, "law enforcement officials are demanding the creation of a 'kill switch'" for stolen phones, citing statistics where one in three robberies involve mobile devices. The New York Attorney General was quoted as calling the current state of phone robberies an "epidemic."

The importance of smartphone security is very well known and understood at AlertBoot, but I must say that the stat has taken me aback. One in three robberies involves a cell phone? The implication is that one could cut all robberies by 33% if one could introduce some kind of phone theft "demotivator."

State AGs "Prepared to Deepen" Inquiry

The New York Attorney General and the San Francisco District Attorney announced the formation of the "Secure Our Smartphones Initiative" coalition, with the aim of pressuring companies to introduce a phone kill switch that would "dry up the secondary market in stolen phones."

The kill switch would work the same way as cancelling a credit card when it gets stolen. The AGs also noted that "the general public should not be forced to pay more for smartphones that have a kill switch," which also corresponds to the credit card model, and semi-threatened:

"We're prepared to deepen our inquiry if that is appropriate," Schneiderman said, though he would not elaborate on how far his office might go to ensure that manufacturers comply with the coalition's demands.

Apple, the manufacturer of iPhones and iPads, recently introduced a similar feature in its upcoming iOS 7 operating system for mobile devices. However, the AGs noted that they've "been led to believe that it is not a kill switch."

What IS a Kill Switch?

Of course, the problem may lie in what they mean by a "kill switch." Is it something that will render the phone inoperable? If so, I can see why the industry might balk at such a feature. Essentially, there's no way to guarantee the inoperability of a device from a remote location.

A kill switch is, at the core, an instruction set that commands a smartphone (in this case) to stop working. For that command to reach a phone the device must be connected to the internet or a cellular network (or something that allows the delivery of the command).

Shield the phone from all networks, which is not that hard to do (going underground, like into the basement, will work for most people), and the kill switch doesn't work. Imagine all the lawsuits that will be filed around the fact that "the kill switch didn't work."
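The delivery problem described above is easy to sketch in a few lines of Python; the class and method names here are hypothetical, purely for illustration:

```python
# Sketch of why a remote kill switch can't be guaranteed: the "kill" command
# only takes effect if the device is reachable over some network.
# Class and attribute names are made up for this illustration.
class Phone:
    def __init__(self):
        self.connected = True   # reachable via cellular or internet
        self.dead = False       # has the kill command taken effect?

    def receive_kill(self) -> bool:
        """Deliver the kill command; report whether it actually landed."""
        if not self.connected:  # phone in a basement, in a shielded bag, etc.
            return False        # the command never arrives
        self.dead = True
        return True

stolen = Phone()
stolen.connected = False        # thief shields it from all networks
assert stolen.receive_kill() is False
assert stolen.dead is False     # "the kill switch didn't work"
```

Nothing in the phone's software can change that `connected = False` branch; it's a physical constraint, which is exactly why guaranteeing remote inoperability is impossible.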

European Model

This is not to say that a kill switch model will not work towards curbing phone theft. Many countries already operate blacklists for stolen phones, where a stolen phone's IMEI number (a phone's unique hardware identifier) is blocked from being activated by the carriers.

The introduction of the blacklist was followed by a significant reduction in phone thefts: stolen phones cannot be activated, and thus lose their raison d'être, removing much of the incentive to steal them in the first place.
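Conceptually, an IMEI blacklist is just a shared deny list that carriers consult at activation time. A minimal sketch, using made-up IMEI values:

```python
# Sketch of the IMEI blacklist model described above: carriers refuse to
# activate any handset whose hardware identifier was reported stolen.
# The IMEI values below are arbitrary examples.
blacklist = set()

def report_stolen(imei: str) -> None:
    """Victim (or carrier) adds the stolen phone's IMEI to the shared list."""
    blacklist.add(imei)

def can_activate(imei: str) -> bool:
    """Carrier-side check performed before activating a handset."""
    return imei not in blacklist

report_stolen("356938035643809")
assert can_activate("490154203237518") is True   # clean phone activates
assert can_activate("356938035643809") is False  # stolen phone is refused
```

The model's weakness is visible in the sketch itself: the check only works where the `blacklist` set is actually shared, which is why phones blocked in one country can still be activated in another.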

However, it's not a complete success for a number of reasons. First, a smartphone that cannot be used as a smartphone still makes a pretty decent music player and mini Internet browser – I should know as an iPod Touch owner. Second, a phone that cannot be activated in one country could be activated in another. This is how international smuggling rings make their money if they deal with smartphones. Third, some people steal stuff without thinking. Others, they just want to watch the world burn...

Incidentally, the second and third reasons I've listed above are why MDM smartphone security is still necessary even if blacklists and kill switches are in place.