Continuous monitoring is enough for compliance, but ISN’T enough for securing data

Every 4,000 miles or so I bring my car in to have the oil changed, the brakes checked and the tires rotated. Why? Because I know that if I leave it to chance, at some point down the road something much more devastating will happen to the car. Many of us follow this simple preventive best practice.

Then why is it that major corporations and modest enterprises alike wait until their security is breached to address growing concerns about data theft, private-information leakage or worse? Many of these companies spend hundreds of thousands of dollars on various security initiatives (especially those bound by a regulatory compliance agency), yet still succumb to breaches that cost, on average, $3.8 million per occurrence to address (per the Ponemon Institute).

Two instances dropped into my inbox this week: a medical center in Long Beach, California, and a Medicaid office in New York State both experienced similar types of breaches that, in my opinion, were completely preventable.

It boils down to continuous monitoring…and that practice doesn’t go far enough.

Continuous monitoring is the cornerstone of many compliance mandates. You find it in HIPAA, PCI, FISMA and others. Something (usually an archival solution gathering syslog files) must collect records of every event that touches the network perimeter. For a medium-size health care facility, that could be more than 500 events per second. For larger organizations, like the Long Beach Medical Center and the Office of the Medicaid Inspector General, the volume is likely 5x that amount. That's a lot of log entries to comb through to find a breach.
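To put those rates in perspective, here is a back-of-the-envelope calculation. It is a sketch using the figures above; the 500 events-per-second and 5x numbers are estimates, not measurements:

```python
# Rough sizing of daily log volume at the rates mentioned above.
# These figures are illustrative assumptions, not measurements.

MEDIUM_FACILITY_EPS = 500          # events/second, medium-size facility
LARGE_MULTIPLIER = 5               # larger orgs at roughly 5x that rate
SECONDS_PER_DAY = 24 * 60 * 60

medium_per_day = MEDIUM_FACILITY_EPS * SECONDS_PER_DAY
large_per_day = medium_per_day * LARGE_MULTIPLIER

print(f"Medium facility: {medium_per_day:,} events/day")  # 43,200,000
print(f"Large org:       {large_per_day:,} events/day")   # 216,000,000
```

At over 200 million events a day, a monthly manual review is not a realistic way to spot a breach in time.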

Many hospitals and health care organizations are under great strain to maintain certain security and privacy protocols because of these compliance laws. They spend a great deal of time and money on security, yet far too often we hear of a breach at one facility or another. There must be a disconnect somewhere.

I think the disconnect is in how the term continuous monitoring is defined and applied as a preventive best practice. Mandates state that systems must be continuously monitored, but they can be vague about how often those system logs must be reviewed. I know of some organizations that only do it once per month. I know of others that don't do it even that often. This is not to say there is no vigilance out there, but the overarching issue is that no matter how often syslogs are reviewed, it is done in the rear-view mirror. These are events that have already occurred. If there was a breach or any kind of suspicious or malicious activity, the horse has already left the barn. The damage is done.

Of course continuous monitoring is important. But it doesn't go far enough. It is not truly preventive. The key is not continuous monitoring, but real-time monitoring, 24/7/365.

But many companies don't have the manpower, the expertise or the technology to achieve this. And for those that do, there is the prospect of extra costs. So they ask: if I am IN compliance, what is my motivation to incur more costs and expend more resources? Anyone who has ignored the red warning light on a dashboard saying it's time for an oil change might be able to tell you. And so might the auditors dealing with the Long Beach Medical Center and New York Medicaid office breaches. You might be in compliance with the letter of the law, but not its spirit.

However, those who say they would need to spend more money and resources aren't looking to the cloud. They might not be aware that SIEM and log management developed, delivered and managed from the cloud can dramatically increase their security capabilities while significantly limiting costs and headcount. They might not be aware that security-as-a-service is that real-time monitoring enhancement in the "sky" that immediately creates an alert the moment suspicious activity occurs and initiates preventive protocols to better safeguard private records. They might not be aware that it can stitch together separate and disparate data silos under a centralized management portal to make spot reviews, audit reporting and day-to-day maintenance much easier. Honestly, if you can accomplish better results on a smaller budget, then it is your duty to at least perform due diligence and explore the option.

This is important in terms of the root causes of the breaches I mentioned earlier. In both cases, the breaches seem to stem from internal sources using unregulated email.

How would real-time monitoring from the cloud have prevented this? Simple, if approached holistically. What is necessary is that credentialing and provisioning functions such as those found in identity management (IDaaS) and enterprise access control (access management) be leveraged with log management and correlated through SIEM (intrusion detection, alerting). It may seem like trying to take a drink from a fire hose, but once all the data is leveraged and all the unique protocols, escalations, provisioning, rights and rules are centralized, real-time monitoring can filter out the false positives and white noise, assess true threats to the network and take appropriate action...BEFORE the damage is done.
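As a rough illustration of that correlation step, the sketch below (a hypothetical data model and rule, not any vendor's API) checks each incoming log event against centralized entitlements and raises an alert the moment an unprovisioned access or an unregulated email of records is seen:

```python
# Minimal sketch: correlate a log-event stream against centralized
# identity/provisioning rules so suspicious activity alerts in real time.
# The data model, users and rules here are invented for illustration.

from dataclasses import dataclass
from typing import Optional

@dataclass
class LogEvent:
    user: str
    action: str        # e.g. "read", "email_external"
    resource: str      # e.g. "patient_records"

# Rights centralized from the identity/access-management side (assumed format).
ENTITLEMENTS = {
    "alice": {"patient_records"},   # clinician: provisioned for records
    "bob": set(),                   # contractor: no record access
}

def correlate(event: LogEvent) -> Optional[str]:
    """Return an alert string for a true threat, None for white noise."""
    allowed = ENTITLEMENTS.get(event.user, set())
    # Flag record access by users without provisioned rights...
    if event.resource not in allowed:
        return f"ALERT: {event.user} touched {event.resource} without rights"
    # ...and any unregulated external email carrying record data.
    if event.action == "email_external" and event.resource == "patient_records":
        return f"ALERT: {event.user} emailed {event.resource} externally"
    return None

for ev in [LogEvent("alice", "read", "patient_records"),
           LogEvent("bob", "email_external", "patient_records")]:
    alert = correlate(ev)
    if alert:
        print(alert)
```

In a real deployment the entitlements would come from the identity-management system and the events from the syslog stream, but the correlation logic has this same shape: centralized rights plus live events yields an alert before, not after, the damage is done.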

So my call to action is this: it is time to reassess what it means to continuously monitor. And that means finding ways to start monitoring in real time and applying preventive and PROACTIVE best practices.