Lessons From LabCorp

As I wrote last week, LabCorp, the mega medical lab testing company (mega as in roughly $10 billion in revenue last year), was breached. They have provided some interesting insights because they were forced to detail to the SEC some of what happened last week when they had to shut down large parts of their network unannounced, halting the testing of lab samples, both in house and in transit.

From what we are gleaning from their filings, they were hit with a ransomware attack, likely a SamSam variant, which seems to have an affinity for the healthcare industry.

They claim that their Security Operations Center was notified, we assume automatically, when the first computer was infected.

That, by itself, is pretty amazing. I bet less than one percent of U.S. companies could achieve that benchmark.

Then, they say, they were able to contain the malware within 50 minutes of the first alert. That too is pretty amazing. In order to do that, you have to know what you are dealing with and how it spreads. Then you have to figure out which “circuit breakers” to trip in order to contain the malware. The City of Denver was hit with a Denial of Service attack a couple of years ago and it took them, they say, a couple of hours to figure out how to disconnect from the Internet. That is more typical than what LabCorp was able to do.

The attack started at around midnight, of course, when the fewest people were around to deal with it. If you factor that into the 50 minute containment time, it is pretty impressive.

However, in that very short 50 minute interval, 7,000 systems were infected, including 1,900 servers. Those numbers are not so good. Of the 1,900 servers, 300 were production servers. That is really not so good.

One of the attack vectors of SamSam is an old Microsoft protocol called Remote Desktop Protocol, or RDP.

RDP should never be publicly accessible, and we don’t know whether it was here. Where it is used internally, it should be severely limited, and where it is genuinely needed, it should require multifactor authentication. While we don’t know for sure, it is likely that RDP was the attack vector and that multifactor authentication was not turned on. Hopefully, as part of their lessons learned, they will change that.
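As a practical first step, you can check whether RDP is reachable from the outside at all. Here is a minimal sketch in Python that simply tests whether the standard RDP port (TCP 3389) accepts connections on a given host; the function name is my own, and a real assessment would of course go further (checking for Network Level Authentication, VPN requirements, and MFA):

```python
import socket

def rdp_port_open(host: str, port: int = 3389, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to the given port succeeds.

    3389 is the default RDP port. Run this from OUTSIDE your network
    perimeter: a True result means RDP is internet-facing and should be
    firewalled off or moved behind a VPN with multifactor authentication.
    """
    try:
        # create_connection handles DNS resolution and the TCP handshake
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out, or host unreachable: port not exposed
        return False
```

This only tells you whether the port answers; it says nothing about how well the service behind it is locked down, so treat a False result as necessary but not sufficient.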

Within a few days they claimed they had 90% of their systems back. It is not clear whether that is 90% of 7,000, which would be quite impressive or 90% of 300, which would be much less impressive but still good.

So what are the takeaways from this?

These conclusions are based mostly on what we can interpret, since they are not saying much. This is likely because they are afraid of being sued, and also of what HIPAA sanctions they might face.

They seem to have excellent monitoring and alerting since they were able to detect the attack very quickly.

They also must have a good security operations center since they were able to identify what they were dealing with and contain it within 50 minutes.

On the other end of the spectrum, the malware was able to infect 7,000 machines including some production machines. They probably need to work on this one.

Assuming RDP was the infection vector, that should not have happened at all – they lose points for this one.

They were able to restart a significant number of machines pretty quickly so it would appear that they have some degree of disaster recovery.

On the other hand, given that they had to shut down their network and stop processing lab work, it says that their business continuity process could use some work.

Finally, they claim that they were able to KNOW that none of the data was removed from the network. I would say that 99% of companies could not do that.

Overall, you can compare how your company stacks up against LabCorp and figure out where you can improve.

Using other companies’ bad luck to learn lessons is probably the least expensive way to improve your security.