June 2011 - Posts

PhyData LLC, a medical billing and management company, has announced a data breach affecting 1,500 people. A laptop computer containing sensitive data was stolen from an employee. Media reports made no mention of drive encryption software like AlertBoot being present. This is, without a doubt, a HIPAA breach.

Stolen from the Mall

According to the tennessean.com, a laptop computer was stolen from the trunk of a PhyData employee's car at the RiverGate Mall on May 7. PhyData is a medical billing and management company, and the employee's laptop contained names, dates of birth, SSNs, and medical record numbers for patients of Advanced Diagnostic Imaging, Premier Radiology, and Anesthesia Services Associates (mostly those who visited the companies between January 2009 and December 2010).

Commentators at the tennessean.com are expressing not only disgust but doubt:

Stolen from the trunk. That alone sounds strange when detailing where the thief stole it and wasn't drawing any attention, from busting in the trunk. When the true story comes forward we will see the employee left it unattentive (sic). [truone]

Which resulted in this reply:

I agree. Taken from the trunk? Was there signage on the auto? Why would someone open a trunk with so many other cars around and possible property in view? This IS NOT the whole story on this one. Maybe the paper doesn't have all the info? This is a really poor article. [TNBikerChick]

Tin-Foil Hatters: Trunks are Broken into All the Time

I would have replied on the site, but it looks like one has to register, so I'll just comment here on this blog: there is no conspiracy. Usually, when a trunk gets busted open, it's because the driver parked and then placed valuable items in the trunk, thinking they'd be safer there. Someone in the parking lot -- possibly a thief looking to catch people stowing things in their trunks -- watches the driver from the moment he enters the garage and, once sure the driver won't be back, goes to work. After all, if an item weren't valuable, why would anyone go through the effort of putting it in the trunk?

Yes, the trunk is the safer place to put valuables if you have to park your vehicle; however, a lot of this safety comes from the fact that trunks don't have windows. Thieves generally break into a car when they know they'll have something to show for their efforts. If they know that it's worth their time to break into a car's trunk, they'll do so.

And why not? Supposedly, a skilled thief can force open a car's trunk in less than a minute (by picking the lock). An unskilled thief can also break in in less than a minute, although it involves smashing the driver's side window and using the trunk release button.

HIPAA / HITECH Compliance

PhyData is a medical billing company. As such, I'm sure it's safe to assume that it is a covered-entity or business associate that needs to follow HIPAA regulations. As a covered-entity, it must comply with HIPAA's Security Rule, which requires that an individual's protected health information (PHI) be protected.

One way to protect such data -- and the only one that is granted safe harbor in case PHI is lost or stolen -- is the use of encryption software. Was it used? Its use hasn't been revealed and the 1,500 patients are being notified of the breach, so it might not be a stretch to assume that medical data encryption for laptops wasn't used.

PhyData had better have a good explanation if the employee was authorized to carry around the now-lost data. After all, any covered-entity is supposed to evaluate whether the use of encryption is warranted; if it decides not to encrypt, it must have a valid (and documented) reason why.

(As a personal observation, I'm not aware of any valid reasons why PHI-laden laptops that are taken outside a hospital setting are allowed to remain unencrypted.)

Informationweek.com notes that it's not hackers that people ought to be scared of when it comes to medical data breaches. That's because it's the theft and loss of physical devices like laptops and external hard disks that drives data losses. Thankfully, there's an easy solution for that: laptop encryption software such as AlertBoot.

Most Common Cause of Medical Breaches in the US

If you follow this blog, you already know this, but it's been confirmed once more: based on the listing of breaches at the Department of Health and Human Services (where incidents involving more than 500 patients are listed publicly), 49% of the 288 listed HIPAA breaches involve physical theft. We're talking about a person stealing laptops, external hard drives, USB sticks, CDs, etc., whether via a car break-in or from a hospital's own premises. Most such breaches can easily be contained with encryption software, whose sole purpose and design is to keep data secure.

Another 14% of the breaches are attributed to the loss of such devices. Again, the use of encryption can mean the difference between an actual data breach (the device is lost and the data potentially read) and a breach on paper only (the loss of the device). Improper disposal accounts for 5%, and yet again its effects can be mitigated by the use of proper cryptographic solutions.

In other words, the use of encryption could benefit the medical field in 68% of the cases. Granted, this number will change as more breaches are reported; however, whether the figure is 68% or 5%, the point is that a medical breach is a medical breach is a medical breach... no matter how many people are affected.
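As a sanity check on the arithmetic, here is a minimal sketch (variable names are my own) summing the three encryption-addressable categories from the HHS listing cited above:

```python
# Share of the 288 HIPAA breaches listed by HHS, by cause, as cited
# above. Figures are percentages of all listed breaches.
theft = 49             # physical theft of laptops, drives, USB sticks, CDs
loss = 14              # lost devices
improper_disposal = 5  # devices discarded without wiping

# Categories where encryption would have kept the data unreadable
# even though the device itself was gone.
encryption_addressable = theft + loss + improper_disposal
print(f"{encryption_addressable}% of listed breaches")  # 68% of listed breaches
```

The 68% figure in the text is just these three causes added together; it moves as new breaches are added to the HHS list.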

If one is not willing to invest in the best security tools, he or she has to invest in at least the reasonable ones. Now, HIPAA, HITECH, and the HHS don't really come out and say what these tools might be, except in one case: strong encryption. (I'm of the opinion that it's one of the more reasonable tools, which makes it the best).

Govinfosecurity.com notes that Governor Rick Perry has signed a healthcare privacy law that goes beyond HIPAA when it comes to patient health data. Various factors helped spur the passage of the hotly-contested bill, including the lack of HIPAA enforcement, recent Texas-wide data breaches, and the increased exchange of health data due to the HITECH Act. Among other things, it appears that the use of encryption solutions like AlertBoot is looked upon favorably, though not as a cure-all.

The new law goes into effect on September 1, 2012.

Increased Penalties

What really stands out about this law is how it changes the financial penalties. If covered-entities in Texas don't comply with HIPAA (a federal law), the state Attorney General is authorized to "institute an action for civil penalties" under the newly amended Texas Health and Safety Code Section 181.201:

Privacy violations are upped from $2,500 to $5,000 per violation "that occurs in one year...committed negligently"

$25,000 for each violation committed knowingly or intentionally

$250,000 for each violation where "a covered entity knowingly or intentionally used protected health information for financial gain"

$1.5 million for repeat offenders ("constitutes a pattern or practice"), plus a possible revocation of professional or institutional licenses

Needless to say, it's pretty tough.

Does the Use of Encryption Lend Any Advantages?

Yes. Safe harbor is not granted for the use of encryption software; this does not mean, however, that there are no advantages to its use. Under subsection (b-1), it is noted that (my emphasis):

(b-1) The total amount of a penalty assessed against a covered entity under Subsection (b) in relation to a violation or violations of Section 181.154 may not exceed $250,000 annually if the court finds that the disclosure was made only to another covered entity and only for a purpose described by Section 181.154(c) and the court finds that:

(1) the protected health information disclosed was encrypted or transmitted using encryption technology designed to protect against improper disclosure;

In other words, you don't get a get-out-of-jail-free card, but the state will cap damages if cryptographic solutions are used. Without encryption, why, the sky's the limit when it comes to the penalties assessed.
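To make the interaction between the tiers and the cap concrete, here's a small sketch (function and tier names are my own, not from the statute; the dollar figures are the ones quoted above) of how the Subsection (b-1) cap changes the outcome:

```python
# Hypothetical model of the penalty tiers in the amended Section
# 181.201, as quoted in this post. Not legal advice; names are mine.
PER_VIOLATION = {
    "negligent": 5_000,        # committed negligently
    "knowing": 25_000,         # committed knowingly or intentionally
    "financial_gain": 250_000, # PHI used for financial gain
}

# Subsection (b-1): annual cap when the disclosed PHI was encrypted
# and the other conditions (e.g., disclosure only to another covered
# entity) are met.
ENCRYPTED_ANNUAL_CAP = 250_000


def assess(kind, violations, encrypted=False):
    """Total civil penalty for a year's worth of violations."""
    total = PER_VIOLATION[kind] * violations
    if encrypted:
        total = min(total, ENCRYPTED_ANNUAL_CAP)
    return total


# 100 knowing violations: $2.5 million uncapped...
print(assess("knowing", 100))                  # 2500000
# ...but capped at $250,000 when encryption was in place.
print(assess("knowing", 100, encrypted=True))  # 250000
```

Even a modest batch of violations blows past the cap without encryption, which is presumably the point of the incentive.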

Note, though, that the law clearly specifies "encryption technology designed to protect against improper disclosure." This seems to indicate that someone will also judge whether the encryption software used was up to par.

This is a pretty smart move. In the past, laws involving encryption have been controversial because an accurate legal definition of what encryption is couldn't be established. The wording above sidesteps the need for such a definition; at the same time, it seems to fall short of what HIPAA has done to define encryption (or, rather, strong encryption: basically, any encryption software that has been tested and approved by NIST as strong encryption).

India is proposing a new Privacy Bill, slated for the next Indian Parliamentary session. There are a number of ground-breaking rules but I noticed that there is no specific safe harbor from penalties when data encryption software like AlertBoot is used to protect information. On the other hand, the law does specify that "every data controller shall ensure the security of data under its control by taking appropriate measures."

Lots of New Rules

While the bill is not yet officially open for public debate, it is being discussed in the press that it covers a range of issues:

Protections for health information of Indian citizens

Revocation of telecommunication licenses for any companies that illegally intercept phone calls

The creation of a Data Protection Authority of India that will monitor and enforce compliance of the bill

Fines of Rs 1 lakh (Rs 100,000) and a 5-year jail sentence for illegal snooping, and fines of Rs 50,000 and a 3-year jail sentence for anyone who was involved (indirectly, I'm assuming).

Data Security

Every data controller shall ensure the security of data under its control by taking appropriate measures to prevent --

The loss or theft of, or damage to, or unauthorised destruction of such data; or

The unlawful processing of such data; or

The unauthorised disclosure (either accidental or intentional) of such data.

Where a data controller engages a sub-contractor to process the data on its behalf, the data controller shall ensure that the data entrusted to the sub-contractor under a contract arrangement is maintained in the same manner as if it is maintained by the data controller itself.

Every sub-contractor shall be deemed to be the data controller in respect of the data under his control and shall deal with the data in accordance with the provisions of this Act.

Clearly, there is no mention of whether the use of encryption software offers safe harbor, be it from potential penalties or from having to send breach notification letters.

However, I get the feeling that the UK's own Data Protection Act had something of an influence on this particular bill. If true, there's a good chance that the Data Protection Authority of India (yet to be established) will look favorably upon the use of encryption programs, something the UK's own Information Commissioner's Office already does (it regularly includes the use of encryption as a provision in its Undertakings with breached entities).

Problems?

I'm not about to pass judgment on a bill that hasn't yet passed, but I did notice that many in the media are reporting this:

Taking a tough stand against government officers, the Bill proposes that "where an offence under this Act has been committed by any department of the Government, the Head of the Department shall be deemed to be guilty of the offence and shall be liable to be proceeded against and punished accordingly unless he proves that the offence was committed without his/her knowledge or he/she exercised due diligence to prevent the commission of such offence." [deccanherald.com, my emphasis]

While many would applaud this, it seems to me that there are problems with it.

First off, it asks one to prove the non-existence of something. If you know anything about basic logic, you know this is, for practical purposes, impossible. The classic example involves black swans. Today we know that black swans exist, but prior to 1697 it was common knowledge in Europe that they didn't; the "evidence" was that all the swans people could find were white. All it took to topple that claim was one black swan, which an explorer came across in Australia in 1697. You can disprove a claim of non-existence with a single counterexample, but you can never finish proving it. Coming back to our government officials: there's no way for them to prove they didn't know.

Second -- and this builds on the first problem pin-pointed above -- a subordinate could make trouble for his superior by committing an offense and then claiming that the superior ordered him to do it. It's his word against the superior's, and we've already shown that the superior can't prove, at least in purely logical terms, that he didn't.

Granted, there is a due diligence clause, but these can be used or abused to suit particular needs, be it political or otherwise. That's not to say that department heads should not be held accountable for breaches. I'm just pointing out how it's a particularly hard task.

Regardless, if the bill passes, it would mean better privacy rights for Indian citizens.

The California Department of Public Health (CDPH) has announced a second major breach. An employee downloaded the information of approximately 9,000 employees to a personal hard drive, an unauthorized move.

The use of disk encryption software in this case is a moot point, kind of. Had the employee subsequently lost the hard drive, it would have been an important factor. However, the disk was not lost.

What is important is that the CDPH incident was discovered by the department's security detection system, once again providing evidence that numerous components come into play when dealing with data security.

What I wanted to comment on, though, is the fact that the CDPH only went public with the case three months after it became aware of the data breach. The reason given? According to healthleadersmedia.com:

Asked why the breach took three months to announce, CDPH spokesman Al Lundeen said in a telephone interview Friday that the incident required a lengthy investigation.

Sure, you could argue that these were not patients that were affected, but still... why is sensitive employee information (which includes SSNs in this case) any less important than sensitive patient information?

It appears to me that the CDPH is not playing with a full deck of cards.

Some troubling allegations have surfaced in the Sony data breach saga. According to a recent lawsuit filed in San Diego, Sony is accused of knowing that "it was at increased risk of attack." The evidence for such, per timesofindia.com:

It has already experienced smaller breaches, which means that Sony knew it was a target (but then, aren't all websites on-line targets all the time? Especially the mega-conglomerates?)

It fired employees in the Network Operations Center which is "responsible for preparing for, and responding to, security breaches" two weeks prior to the breaches

It installed firewalls and other security measures to protect the company data while not doing the same for its customer data

I'm not sure that the above evidence means Sony knew anything; the last two allegations, however, are certainly troubling -- unless there's an explanation for them.

Downsizing in Bad Times, or Something Else?

Sony couldn't have known that firing employees in the Network Operations Center would result in a breach two weeks later. On the other hand, if you have a laid-off employee with a chip on his shoulder, and he knows how bad Sony's on-line security happens to be (and let's not forget that the attacks Sony sustained were very basic and avoidable)... well, you get my drift.

Regardless of whether that occurred, the point is that everyone knows that the attacks are coming at some point. Firing a sizable number of people in charge of responding to an attack is a bad policy. At the same time, these are presumably the people who were supposed to ensure that the defenses were in place to begin with. We know those defenses were severely lacking. So, how useful would they have been when the attack finally came?

Maybe they were fired exactly because Sony finally wised up to the fact that these guys weren't doing their jobs (and hindsight tells us that they weren't).

That's not to discount a scenario where a (shortsighted) bean counter arrives at the conclusion that Sony doesn't need all these people waiting around for something significant to happen, because nothing has happened... and probably never will. I mean, that's why full disk encryption software like AlertBoot is usually deployed at an organization only after a data breach takes place, be it a stolen computer or a missing external USB hard drive.

But, let's face it, Sony had some lax security in place. Giving the pink slip to all those people because they weren't doing their jobs could be a possibility.

Our Data vs. Their Data?

The third allegation listed above -- that Sony protected its own data while doing nothing similar for customer data -- seems to be the most serious (and bizarre) accusation: one of the crown jewels of any going concern is its customer list, and most organizations guard it jealously.

Much has been written about how it's easier and cheaper to sell (or up-sell) to an existing customer than to a new one. And there are always the lawsuits revolving around one company pilfering another company's clients, or around former employees who start a new company with the old company's stolen client list. I know of acquisitions whose main or sole purpose was the buyer getting its hands on the acquired company's client list.

So, client data does not usually take a backseat to a company's own data. In fact, client data is not usually thought of as "their data" so much as "our client data."

Which brings me to this point: client data is not usually stored separately from company data, completely siloed in a separate network. This in turn means that an attack on client data could be a means for entering the entire company network. Under such a scenario, it would be senseless to just protect "company data" and do nothing for the "client data."

So, if the accusations are true, I'm not sure what to think. Perhaps Sony was rolling out security, started with company data, and was eventually going to reach the client data? Or were its data security plans really so terrible that Sony overlooked many of the elementary aspects of data security?