Thoughts On Delivering Meaningful Outcomes in Security and Privacy


If you are like most medium or large healthcare providers these days, your Electronic Health Record (EHR) environment is likely a very complex one. Such complexity brings with it a fair amount of difficulty in monitoring the environment for security incidents.

Monitoring for security incidents is different from privacy monitoring

Many such healthcare providers have also invested in privacy monitoring solutions over the last few years. These investments have been driven largely by the HIPAA Security and Privacy Rules or Meaningful Use mandates, as well as by the need to identify and respond effectively to privacy incidents or complaints.

Privacy monitoring use cases fall into a fairly limited set of categories – e.g. snooping on the records of neighbors, workforce members or celebrities. Given the nature and the somewhat narrow definition of these use cases, many organizations appear to be doing a good job in this respect. This is especially the case when organizations have implemented one of the leading privacy monitoring solutions.
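For illustration only, a much-simplified version of one such rule (a "same last name" snooping trigger) might look like the following Python sketch. The access-log records and field names here are hypothetical; real privacy monitoring products use far richer context such as address proximity, visit history and job role:

```python
# Deliberately simplified sketch of a "family/neighbor snooping" privacy
# rule: flag accesses where the workforce member shares a last name with
# the patient whose record was opened. The log format and field names
# are hypothetical. A flag is a trigger for human review, not proof of
# wrongdoing.

def flag_possible_snooping(access_log):
    """Return access events where user and patient share a last name."""
    return [event for event in access_log
            if event["user_last_name"].lower() == event["patient_last_name"].lower()
            and event["user_id"] != event["patient_id"]]

access_log = [
    {"user_id": "u1", "user_last_name": "Diaz",
     "patient_id": "p9", "patient_last_name": "Diaz"},
    {"user_id": "u2", "user_last_name": "Lee",
     "patient_id": "p3", "patient_last_name": "Khan"},
]
flagged = flag_possible_snooping(access_log)
print(flagged)  # only the Diaz/Diaz access is returned
```

The narrowness of rules like this is exactly why privacy monitoring has been tractable: the trigger conditions are few and well understood.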

While such organizations have had notable success with monitoring for privacy incidents, the same can’t be said for monitoring of security incidents. This is so despite the fact that most of these organizations have invested substantively in security – be it security monitoring solutions such as Security Information and Event Management (SIEM) or services such as third-party managed security services.

Where is the problem and what might we do about it?

In our experience, the lack of effective security monitoring capabilities across EHR environments can usually be attributed to the lack of appropriate security logs to begin with. And it is usually not a straightforward problem to solve, for more than one reason. The most common reason is the complex nature of the applications and their diverse sets of components or modules. Many of the EHRs, in our view, were simply not designed with good security monitoring in mind. One can also point to the rather complex and custom workflows that these EHRs support at each organization.

Solving this problem usually requires a specialist effort by personnel who have a strong background in security (and security monitoring). We also need people who have specialist knowledge of and experience with the respective EHR applications. After all, each EHR application is unique in how the vendor has implemented its security and security logging features.

How could we help?

Our RiskLCM services can help develop a strategy and assist with implementing a sustainable security monitoring program for your EHR(s). We have experience doing this for Epic and Cerner among others and can help you leverage your existing security/privacy monitoring technologies or managed services investments.

Please leave us a message at +1 312-544-9625 or send us a note to RiskLCM@rnc2.com if you would like to discuss further.

The OPM breach has been deservedly in the news for over a month now. Much has been written and said about it across the mainstream media and the internet1.

I want to focus here on a topic that hasn’t necessarily been discussed in public, perhaps not at all: could the OIG (and their audit reports) have done more, or done things differently, than what they did year after year in issuing these reports? Specifically, how could these audit reports have driven some urgently needed attention to the higher risks and perhaps helped prevent the breach?

Let us look at the OIG’s latest Federal Information Security Management Act (FISMA) audit report, issued in November 2014 (pdf), as a case in point. The report runs over 60 pages and looks to have been a reasonably good effort in meeting its objective of covering the general state of compliance with FISMA. However, I am not sure the report is of any use for “real world” risk management purposes at an agency that had known organizational constraints in the availability of appropriate security people or resources; an agency that should have felt some urgency in implementing certain safeguards on at least the one or two critical systems, given the nature and quantity of sensitive information they held.

I also believe providing a list of findings in the Executive Summary (on page 2) was a wasted opportunity. Instead of providing a list of compliance or controls gaps, the summary should have included specific call-to-action statements by articulating the higher risks and providing actionable recommendations for what the OPM could have done over the following months in a prioritized fashion.

Here then are my recommended takeaways:

1. If you are an auditor performing an audit or a consultant performing a security assessment, you might want to emphasize “real” risks, as opposed to compliance or controls gaps that may be merely academic in many cases. Recognize that evaluation and articulation of risks require a more complete understanding of the business, technology and regulatory circumstances as compared to what you might need to know if you were merely writing gaps against certain controls or compliance requirements.

2. Consider the organizational realities or constraints and think about creative options for risk management. Always recommend feasible quick-wins in risk mitigation and actionable prioritization of longer term tasks.

3. Do not hesitate to bring in or engage with specialists if you aren’t sure you can evaluate or articulate risks and recommend mitigation tasks well enough. Engage with relevant stakeholders that would be responsible for risk mitigation to make sure they are realistically able to implement your recommendations, at least the ones that you recommend for implementation before your next audit or assessment.

In closing, I would strongly emphasize a focus on meaningful risk management outcomes, not just on producing reports or deliverables. A great-looking deliverable that doesn’t convey the relative levels of real risks and the urgency of mitigating certain higher risks is not going to serve any meaningful purpose.

References for additional reading

1. At the time of this writing, I found these two links to be useful reading for substantive information on the OPM breach.

2. You may also be interested in a quick read of our recommendations for agile approaches to security/privacy/compliance risk assessments or management. A pdf of our slide deck will be emailed to you after a quick registration here.

Since I follow the telehealth space rather closely from a security/privacy perspective, I was drawn yesterday to this article titled “How Health Privacy Regulations Hinder Telehealth Adoption”. From my experience, I know telehealth has many obstacles to overcome, but I have never thought of security-privacy as prominent among them. I have certainly not thought of security-privacy as a hindrance to its adoption, as the article’s title suggests.

I read the article and then downloaded the original AHA paper (pdf) the article is based on.

It wasn’t long before I concluded that the title of the article was misplaced, in my opinion.

The AHA paper is nicely written and, in my view, very objective. It covers a number of areas that are true challenges to telehealth adoption, but it doesn’t portray security-privacy as a hindrance, contrary to the title of the article. Instead, it discusses specific security-privacy considerations for planning and implementation (see page 10 of the pdf). These considerations are no different from what one would need to address when deploying any new type of technology.

The considerations are the right things to do if you were to have any confidence in your ability to safeguard patient privacy and safety. Sure, there are some regulatory aspects (discussed on page 11) but these are no different from what we need for protecting Protected Health Information (PHI) in any form.

In conclusion, I think the author should perhaps look to change the title lest anyone should think that it adds to the FUD, of which there is no shortage in security, as we know.

Over this period, the mainstream media and many bloggers and commentators have, as usual, been all over it. Many have resorted to some not-so-well-thought-out statements (at least in my opinion, as well as that of a couple of others1), such as that “encryption” could have prevented it. Some have even faulted HIPAA for not mandating encryption of data-at-rest2.

Amidst all this, I believe there has been some good reporting as well, albeit very few. I am going to point to a couple of articles by Steve Ragan at CSOOnline.com here and here.

I provide an analysis here of perhaps how Anthem could have detected and stopped the breach before the data was exfiltrated. This is based on the assumption that the information published in Steve Ragan’s articles is accurate.

Let’s start with some known information then:

“Anthem, based on data posted to LinkedIn and job listings, uses TeraData for data warehousing, which is a robust platform that’s able to work with a number of enterprise applications”. Quoted from here.

“According to a memo from Anthem to its clients, the earliest signs of questionable database activity date back to December 10, 2014”. Quoted from here.

“On January 27, 2015, an Anthem associate, a database administrator, discovered suspicious activity – a database query running using the associate’s logon information. He had not initiated the query and immediately stopped the query and alerted Anthem’s Information Security department. It was also discovered the logon information for additional database administrators had been compromised.” Quoted from the same article as above.

I went over to the Teradata site to download their Security Administration guide for Release 13 of the Teradata Database (download link). I downloaded the guide for an older version, from November 2009. I am assuming Anthem is using Release 13 or later and so isn’t missing the features I am looking at.

Database logging can be challenging sometimes and, depending on the features available in your database, the logging configurations can generate a lot of noise. This, in turn, may make it difficult to detect events of interest. I wanted to make sure there weren’t such issues in this case.

It turns out Teradata is fairly good in its logging capabilities. Based on the relevant sections of the guide, it appears one should be able to configure log generation specifically for a DBA performing a SELECT query on a table containing sensitive data.


There should not ordinarily be a reason for a DBA to query for sensitive data, so this should have been identified as a high-risk alert use case by their Logging and Monitoring program.

I am assuming Anthem also has a Security Information and Event Management (SIEM) solution that they use for security event monitoring. Even a garden variety SIEM solution should be able to collect these logs and raise an immediate alert considering the “high risk” nature of a DBA trying to query for sensitive data.
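To make the idea concrete, here is a minimal Python sketch of the kind of correlation rule a garden variety SIEM would express in its own rule language. The event format, account names and table names are all hypothetical:

```python
# Minimal sketch of a SIEM correlation rule: alert when a DBA account
# runs a SELECT against a table known to hold sensitive data.
# Event format, account names and table names are hypothetical.

SENSITIVE_TABLES = {"member_pii", "claims_detail"}
DBA_ACCOUNTS = {"dba_smith", "dba_jones"}

def check_event(event):
    """Return an alert dict for a high-risk event, else None."""
    if (event["account"] in DBA_ACCOUNTS
            and event["statement_type"] == "SELECT"
            and event["object"] in SENSITIVE_TABLES):
        return {"severity": "high",
                "reason": f"DBA {event['account']} queried {event['object']}",
                "event": event}
    return None

events = [
    {"account": "app_svc", "statement_type": "SELECT", "object": "member_pii"},
    {"account": "dba_smith", "statement_type": "SELECT", "object": "member_pii"},
    {"account": "dba_smith", "statement_type": "UPDATE", "object": "ref_codes"},
]
alerts = [a for a in (check_event(e) for e in events) if a]
print(alerts)  # a single high-severity alert, for dba_smith on member_pii
```

In a real SIEM this would be a correlation search or rule rather than code, but the logic is the same: a small context table (which accounts are DBAs, which tables are sensitive) plus a simple match condition. Nothing about it requires an expensive analytics platform.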

This alert should have gone to someone who is accountable or responsible for security incident response. It appears that didn’t happen. This is symptomatic of a lack of an “ownership” and “accountability” culture, in my view. For a case of this nature, I strongly recommend the IT owner (e.g. the Manager or Director of the Database Team) be on point to receive such alerts involving sensitive data. Your Security Operations folks may not necessarily know the context of the query and therefore its high-risk nature. I talked about this in a guest post last month; see the last bullet point in that post.

As the third quote above shows, it appears one of the DBAs discovered someone using his/her credentials to run that query. You certainly don’t want to leave it to a DBA to monitor their own actions. If this had been a malicious DBA, we might be talking about a major breach caused by an insider and not by an Advanced Persistent Threat (APT) actor, as the Anthem breach appears to be. But then, I digress.

If the high-risk anomalous DBA activity had been discovered immediately through the alert, and if appropriate incident response steps had been initiated, Anthem might have been able to stop the breach before the APT actor took the data out of the Anthem network.

So, when you come to think of it, some simple steps of due diligence in establishing security and governance practices might have helped avoid a lot of pain for Anthem, not to mention a lifetime of hurt for the people and families impacted by the breach.

Here then are some take-aways if you would like to review your security program and want to make some changes:

You may not need that shiny object. As explained above, an average SIEM solution can raise such an alert. We certainly don’t need that “big data” “analytics” solution costing hundreds of thousands or millions of dollars.

Clarity in objectives3. Define your needs and use cases before you ever think of a tool or technology. Even a fancy technology would be of no use if we didn’t identify the high-risk use case of monitoring DBA activity and implement an alert for it.

Process definition and speed of incident response. People and process aspects are just as important as (if not more important than) the technology itself. Unfortunately, we have too many instances of expensive technologies not being effective because we didn’t design and implement the associated people/process workflows for security monitoring and timely incident response.

Ownership and accountability. I talked about this topic last month with a fair amount of detail and examples. While our Security Operations teams have their job to do in specific cases, I believe that the IT leadership must be “accountable” for security of the data collected, received, processed, stored or transmitted by their respective systems. In the absence of such an accountability and ownership culture, our security monitoring and response programs will likely not be effective.

Focus on quick wins. If we look at our environments with inquisitive eyes and ears, most of us will likely identify quick wins for risk reduction. By quick wins, I am referring to actions for reducing higher risk levels that we can accomplish in weeks rather than months, without deploying a lot of resources. Not all of our risk management action plans have to necessarily be driven by formal projects. In the context of this Anthem example, it should be a quick win to implement an alert and have the database manager begin to watch for these alerts.

Don’t accept pedestrian risk assessments and management. If you go back and look at your last risk assessment involving a sensitive database, for example, what risks were identified? What were the recommendations? Were those recommendations actionable, or were they “template” statements? Did the recommendations identify quick-win risk reduction opportunities? Did you follow through to implement the quick wins? In other words, the quality of a risk assessment should be determined primarily by the risk reduction opportunities you were able to identify and the outcomes you were able to accomplish within a reasonable period of time. The quality of paper deliverables, methodology, etc. is not nearly as important, though that’s not to say they don’t matter.

Stay away from heavyweight security frameworks. We talked about this last year. I’ll probably have more to say about it in another post. Using #AnthemHack as an example, I plan to illustrate how a particular leading security framework wouldn’t be very helpful. In fact, I believe that using heavyweight security frameworks can be detrimental to most security programs. They consume a lot of time and precious resources, and draw focus away from accomplishing the risk reduction outcomes that truly matter.

Effective governance and leadership. Last but not least, the need for leadership and governance should come as no surprise. None of the previous items on this list can be truly accomplished without an emphasis on governance and leadership, starting right at the board level and extending across the executive leadership.

I hope the analysis and recommendations are useful to you.

Remember, while the techniques employed by APT actors may be advanced and persistent, the vulnerabilities they exploit are often there only because we didn’t do some basic things right, or perhaps because we made it too hard and complicated for ourselves to do them right.

HealthcareITNews reported yesterday on this letter that was written by several physician organizations to the ONC.

I wanted to write a couple of quick thoughts on the security aspects raised in the letter. I highlighted relevant parts on pages 1 and 2 of the letter with annotations #1, #2 and #3.

Here then are my thoughts on the three items…

#1

We agree with this point. We have talked about our security related concerns around the EHR Certification process and the Meaningful Use program previously. Here and here are a couple of posts for example.

The first link has our commentary we published on the OIG report being referred to in the letter.

The second linked post on Patient Portals has specific details of our thoughts on the security criteria in the MU and Certification programs. We also discussed specific due diligence recommendations for providers. These recommendations should also apply to Electronic Health Records (EHRs) for the most part.

#2 and #3

These two paragraphs in the letter speak to the Identity and Access Management (IAM) related concerns, in particular around stronger authentication and usability.

We couldn’t agree more on these points. I am also glad the letter highlights the need for strong authentication.

It is no secret that IAM programs in general haven’t lived up to the promise and expectations. Healthcare provider settings in particular present specific challenges, primarily because of the need for IAM to be truly “transparent” and to support clinical workflows seamlessly. We know this continues to be a challenge at most healthcare provider organizations. The point being made in the letter should come as no surprise to anyone.

In our view, an effective solution to this problem requires the IAM/HealthIT product vendors as well as IAM/Security consultants to “up” their game.

And then, healthcare providers (especially the larger ones who have the power and influence to move their vendors to act) have an important role to play in bringing the IAM and HealthIT vendors to the table so we have viable technology options available to us. We first talked about it at this webinar back in 2013, but I don’t think we are anywhere close to seeing viable technology options yet in leading vendor solutions.

In summary, I think these security related arguments being made in the letter are very valid. However, I am not sure how much ONC can do to move us forward. At best, I think the ONC can only “take the horse to the water” as it were. I really think we need both the IAM and HealthIT vendors to step up and collaborate actively to deliver viable solutions. And the healthcare providers need to push the vendors to do it.

I hope this has been a helpful read. Please don’t hesitate to leave your thoughts below, good or bad.

Like many other Health IT initiatives today, the primary driver for patient portals is regulatory in nature. Specifically, it is the Meaningful Use requirements related to view, download or transmit and secure messaging. However, the biggest long term benefit of the portals might be what they can do for patient engagement and as a result, to the providers’ business in the increasingly competitive and complex healthcare marketplace in the United States.

The objective of this post is to discuss the security aspects of patient portals, specifically, why the current practices in implementing these portals could pose a big problem for many providers. More importantly, we’ll discuss specific recommendations for due diligence actions that the providers should take immediately as well as in the longer term.

Before we get to discuss the security aspects, I think it is important to “set the stage” by discussing some background on patient portals. Accordingly, this post covers the following areas in the indicated sequence:

1. What are patient portals and what features do they (or could) provide?

2. Importance of patient engagement and the role of patient portals in patient engagement

3. The problem with the current state in Health IT and hence the risks that the portals bring

4. Why relying on regulations or vendors is a recipe for certain failure of your security program

5. What can/should we do (right now and in the future) – Our recommendations

1. What are Patient Portals and what features do they (or could) provide?

A patient portal is a secure online website that gives patients convenient 24-hour access to personal health information from anywhere with an Internet connection. Using a secure username and password, patients can view health information such as:

• Recent doctor visits

• Discharge summaries

• Medications

• Immunizations

• Allergies

• Lab results

Some patient portals also allow patients to:

• Exchange secure e-mail with their health care teams

• Request prescription refills

• Schedule non-urgent appointments

• Check benefits and coverage

• Update contact information

• Make payments

• Download and complete forms

• View educational materials

The bottom-line is that patient portals provide a means for patients to access or post sensitive health or payment information. In the future, their use could expand further to include integration with mobile health (mHealth) applications and wearables. Last week’s news from Epic should provide a sense of things to come.

2. Importance of patient engagement and the role of patient portals in patient engagement

As we said above, the primary driver for patient portals so far has been the Meaningful Use requirements related to view, download or transmit and secure messaging. However, the biggest long term benefit of the portals might be what they can do for patient engagement and becoming a key business enabler for providers.

The portals are indeed a leading way for providers to engage with patients, as can be seen in this graphic from the 2014 Healthcare IT Priorities published by InformationWeek1.

Effective patient engagement of course can bring tremendous business benefits, efficiencies and competitive edge to providers.

From a patient’s perspective, the portals can offer an easier method for interacting with their providers, which in turn has its own benefits for patients. To quote from the recently released HIMSS report titled “The State of Patient Engagement and Health IT”2:

A patient’s greater engagement in health care contributes to improved health outcomes, and information technologies can support engagement.

In essence, the importance of patient portals as a strategic business and technology solution for healthcare providers doesn’t need too much emphasis.

3. The problem with the current state in Health IT and hence the risks that the portals bring

In my view, the below quote from the cover page of the 2014 Healthcare IT Priorities Report published by InformationWeek1 pretty much sums it up for this section.

Regulatory requirements have gone from high priority to the only priority for healthcare IT.

4. Why relying on regulations or vendors is a recipe for certain failure of your security program

It is probably safe to say that security in design and implementation is perhaps not the uppermost concern that HealthIT vendors have (certainly not the patient portal vendors in my opinion) today. To make it easy for them, we have lackluster security/privacy requirements in the regulation for certifying Electronic Health Records.

If you are a diligent provider, you will want to make sure that the vendor has met the above requirements even though the certification criteria do not include them. The reality though may be different. In my experience, providers often do not perform all the necessary due diligence before purchasing the products.

And then, when providers implement these products and attest to Meaningful Use, they are expected to do a security risk analysis (see the green highlighted requirement in the pdf). In my experience, again, a risk analysis is not performed in all cases, and of the analyses that are performed, many are not really risk assessments.

The bottom-line? … Many providers take huge risks in going live with patient portals that are neither secure nor compliant (not compliant because they didn’t perform a true risk analysis and mitigate the risks appropriately).

If you look again (in 1 above) at the types of information patient portals handle, it is not far-fetched to say that many providers may have security breaches waiting to happen. It is even possible that some have already been breached but they don’t know yet.

Considering that patient portals are often gateways to the more lucrative (from a hacker’s standpoint) EHR systems, intruders may take their time to escalate privileges and move laterally to other systems once they have a foothold in a patient portal. Considering that detecting intrusions is very often the Achilles heel of even the most well-funded and sophisticated organizations, this should be a cause for concern at many providers.

5. What can/should we do (right now and in the future) – Our recommendations

It is time to talk about what really matters and some tangible next steps …

What can or must we do immediately and in the future?

Below are our recommendations for immediate action:

a) If you didn’t do the due diligence during procurement of your patient portal product, you may want to ask the vendor for the following:

· Application security testing (static and dynamic) and remediation reports of the product version you are using

· Penetration testing results and remediation status

b) If the portal doesn’t provide risk-based strong (or adaptive) authentication for patient and provider access, you may want to insist on the vendor committing to include that as a feature in the next release.

c) Ask the vendor for application security (static and dynamic) and pen test results for every release.

d) Segment the patient portal appropriately from the rest of your environment (also a foundational prerequisite for PCI DSS scope reduction if you are processing payments with credit/debit cards).

e) Perform your own external/internal pen tests every year and scans every quarter (Note: if you are processing payments with your patient portal, the portal is likely in scope for PCI DSS, which requires you to do this anyway).

f) Conduct security risk assessments every year or upon a major change (This also happens to be a compliance requirement from three different regulations that will apply to the patient portal – HIPAA Security Rule, Meaningful Use and PCI DSS, if processing payments using credit/debit cards).

g) If you use any open source modules, either on your own or within the vendor product, make sure to apply patches to them promptly as they are released.

h) Make sure all open source binaries are security tested before they are used to begin with.

i) If the vendor can’t provide support for strong authentication, look at your own options for providing risk-based authentication to consumers. In the meanwhile, review your password processes (including password creation, reset steps, etc.) to make sure they are not too burdensome on consumers and yet are secure enough.

j) Another recommended option is to allow users to authenticate using an external identity (e.g. Google, Facebook, etc., using OpenID Connect or similar mechanisms), which may actually be preferable from the user’s standpoint as they don’t have to remember a separate log-in credential for access to the portal. Just make sure to strongly recommend that they use the two-step verification that almost all of these social media sites provide today.

k) Implement robust logging and monitoring in the patient portal environment (Hint: logging and monitoring is not necessarily about implementing just a “fancy” technology solution; there is more to it, which we’ll cover in a future post).
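As a rough sketch of the external-identity option in j) above, here is how a portal might construct an OpenID Connect authorization request in Python. The endpoint, client id and redirect URI are placeholders, not real values; in practice they come from the identity provider’s discovery document and your client registration:

```python
# Sketch of building an OpenID Connect authorization request for the
# "log in with an external identity" option (authorization code flow).
# All endpoint and client values below are placeholders.

import secrets
from urllib.parse import urlencode

def build_auth_request(authorize_endpoint, client_id, redirect_uri):
    """Return (url, state, nonce). Persist state and nonce server-side
    and verify them when the provider redirects back to you."""
    state = secrets.token_urlsafe(16)   # protects the callback against CSRF
    nonce = secrets.token_urlsafe(16)   # binds the returned ID token to this request
    params = {
        "response_type": "code",        # authorization code flow
        "scope": "openid email",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "state": state,
        "nonce": nonce,
    }
    return f"{authorize_endpoint}?{urlencode(params)}", state, nonce

url, state, nonce = build_auth_request(
    "https://idp.example.com/authorize",          # placeholder endpoint
    "portal-client-id",                           # placeholder client id
    "https://portal.example.com/oidc/callback")   # placeholder redirect URI
print(url.startswith("https://idp.example.com/authorize?"))  # True
```

A real deployment should use a vetted OIDC client library rather than hand-rolled requests; the sketch only illustrates why the state and nonce parameters exist and must be verified on the way back.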

In summary, there is just too much at stake for providers and patients alike to let the status quo of security in patient portals continue the way it is. We recommend all providers take priority action considering the lasting and serious business consequences that could result from a potential breach.

As always, we welcome your thoughts, comments or critique. Please post them below.

This is a detailed follow-up to the quick post I wrote the Friday before the Labor Day weekend, based on my read at the time of the PCI SSC’s Special Interest Group paper on “Best practices for maintaining PCI DSS compliance”1 published just the day before.

The best practices guidance is by and large good, though nothing of what is discussed is necessarily new or groundbreaking. The bottom line of the paper is the reality facing any person or organization with electronic information of some value (and who doesn’t have that today?): there is no substitute for constant and appropriate security vigilance in today’s digital world.

That said, I am not sure this guidance (or anything else PCI SSC has done so far with PCI DSS including the new version 3 taking effect at the end of the year) is going to result in the change we need… the change in how PCI organizations are able to prevent or at least able to detect and contain the damage caused by security breaches in their cardholder data environments (CDEs). After all, we have had more PCI breaches (both in number and scale) over the past year than at any other time since PCI DSS has been in effect.

One is then naturally forced to question why or how PCI SSC expects a different result if PCI DSS itself hasn’t changed fundamentally over the years. A person no less famous than Albert Einstein is said to have had something to say about doing the same thing over and over again and expecting different results.

If you have had anything to do with the PCI DSS over the last several years, you are probably very familiar with the criticism it has received from time to time. For the record, I think PCI DSS has been a good thing for the industry and it isn’t too hard to recognize that security in PCI could be much worse without the DSS.

At the same time, it is also not hard to see that PCI DSS hasn’t fundamentally changed in its philosophy and approach since its inception in 2006 while the security threat environment itself has evolved drastically both in its nature and scale over this period.

The objective of this post is to offer some suggestions for how to make PCI DSS more effective and meaningful for the amount of money and overheads that merchants and service providers are having to spend on it year after year.

Suggestion #1: Call for Requirement Zero

I am glad the best practices guidance1 highlights the need for a risk based PCI DSS program. It is also pertinent to note that risk assessment is included as a milestone 1 item in the Prioritized Approach tool2 though I doubt many organizations use the suggested prioritization.

In my opinion however, you are not emphasizing the need for a risk based program if your risk assessment requirement is buried inconspicuously under requirement #12 of the 12 requirements (12.2 to be specific). If we are to direct merchants and service providers to execute a risk based PCI DSS program, I believe the best way to do it is by making risk assessment the very first thing that they do soon after identifying and finalizing the CDE they want to live with.

As such, I recommend introducing a new Requirement Zero to include the following:

1. Identify the current CDE and try to reduce the CDE footprint to the extent possible

2. Update the inventory of system components in the CDE (current requirement 2.4)

3. Prepare the CDE network diagram (current requirement 1.1.2) and CHD flow diagram (current requirement 1.1.3). I consider this a critical step; after all, we can only safeguard something valuable if we know it exists. We also talked about how the HIPAA Security Rule could use this requirement in a different post.

4. Conduct a risk assessment (current requirement 12.2)
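The Requirement Zero sequence above can be captured as a simple ordered checklist. The following is a minimal sketch; the step names and the mapping to existing DSS requirement numbers come from this post, but the data structure and function names are hypothetical, not any PCI SSC artifact:

```python
# Proposed "Requirement Zero" as an ordered checklist (illustrative only).
REQUIREMENT_ZERO = [
    {"step": 1, "task": "Identify and reduce the CDE footprint", "maps_to": None},
    {"step": 2, "task": "Update the inventory of CDE system components", "maps_to": "2.4"},
    {"step": 3, "task": "Prepare CDE network and CHD flow diagrams", "maps_to": "1.1.2 / 1.1.3"},
    {"step": 4, "task": "Conduct a risk assessment", "maps_to": "12.2"},
]

def next_incomplete(done_steps):
    """Return the first Requirement Zero step not yet completed, or None."""
    for item in REQUIREMENT_ZERO:
        if item["step"] not in done_steps:
            return item
    return None
```

The ordering is the point: the risk assessment (step 4) can only be meaningful once the CDE scope, inventory and data flows (steps 1 to 3) are pinned down.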

Performing a risk assessment right at the beginning gives organizations the means to evaluate how far they need to go in implementing each of the 200+ requirements. In many cases, they may have to go well beyond the letter of certain requirements and truly address their intent and spirit in order to reduce the estimated risk to acceptable levels.

Performing the risk assessment will also (hopefully) force organizations to consider current and evolving threats and mitigate the risks those threats pose. Without the risk assessment being performed upfront, one naturally falls into the template security mindset we discussed here. As discussed in that post, template approaches are likely to drive a security program to failure down the road (or at least make it ineffective).

Suggestion #2: Discontinue the all-or-nothing approach to requirements

A true risk management program must mean that organizations have the choice not to implement a control when they can clearly articulate that the risk of not implementing it is truly low.

I think PCI DSS has a fundamental contradiction in its philosophy: it pushes an all-or-nothing regulation while advocating a risk-based approach at the same time. In an ideal world where organizations had limitless resources and time at their disposal, they could perhaps fully meet every one of the 200+ requirements while also addressing present and evolving risks. As we know, however, the real world is far from ideal: organizations are almost always faced with constraints all around, certainly in the resources and time available to them.

Making this change (away from the all-or-nothing approach) will of course mean a foundational change in how the whole program is administered by PCI SSC and the card brands. Regardless, the change is too important to be ignored, considering the realities of business constraints and the security landscape.

Suggestion #3: Compensating controls

As anyone who has dealt with PCI DSS knows, documentation of compensating controls is one of its most onerous aspects, so much so that you are sometimes better off implementing the original control than documenting and justifying the “validity” of the compensating control to your QSA. No wonder, then, that a book on PCI DSS compliance devoted a whole chapter to the “art of compensating control”.

The need for compensating controls should be based on the risk to cardholder data, not on the mere fact that a requirement was not implemented. This should be a no-brainer if PCI SSC really wants PCI DSS to be risk based.

If the risk associated with not implementing a control is low enough, organizations should have the choice of not implementing a compensating control, or at least not to the extent the DSS currently expects.
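To make the decision rule concrete, here is a toy sketch of what a risk-based trigger for compensating controls could look like. The likelihood/impact scoring scale and the risk-appetite threshold are entirely hypothetical; a real program would derive them from its own risk assessment methodology:

```python
# Illustrative risk-based decision rule: require a compensating control only
# when the assessed risk of the unimplemented requirement exceeds the
# organization's risk appetite. All thresholds here are made up.
def compensating_control_needed(likelihood, impact, risk_appetite=6):
    """Score risk as likelihood x impact (each rated 1-5) vs. risk appetite."""
    risk_score = likelihood * impact
    return risk_score > risk_appetite

# A low-likelihood, moderate-impact gap (1 x 3 = 3) would not trigger a
# compensating control, while a high-risk gap (4 x 4 = 16) would.
```

The point is not the arithmetic but the direction of the logic: the trigger is the risk to cardholder data, not the missing requirement itself.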

Suggestion #4: Reducing compliance burden and fatigue

As is well known, PCI DSS requires substantial annual effort and expense. If the assessments involve Qualified Security Assessors (QSAs), the overheads are much higher than for self-assessments. Despite such onerous efforts and overheads, even some of the more prominent retailers and well-funded organizations can’t detect their own breaches.

The reality is that most PCI organizations have limited budgets to spend on security let alone on compliance with PCI DSS. Forcing these organizations to divert much of their security funding to repeated annual compliance efforts simply doesn’t make any business or practical sense, especially considering the big question of whether these annual compliance efforts really help improve the ability of organizations to do better against breaches.

I would like to suggest the following changes for reducing compliance burden so that organizations can spend more of their security budgets on initiatives and activities that can truly reduce the risk of breaches:

All 200+ requirements or controls would be in scope for compliance assessments (internal or by a QSA) only during the first year of the three-year PCI DSS update cycle. Remember that organizations may still choose not to implement certain controls based on the results of the risk assessment (see suggestion #2 above)

For the remaining two years, organizations would be required only to perform a risk assessment and implement appropriate changes in their environment to address any increased risk. Risk assessments must be performed with the right level of due diligence, and must include (among other things) review of key information obtained through firewall reviews (requirement 1.1.7), application security testing (requirement 6.6), access reviews (requirement 7), vulnerability scans (11.2) and penetration tests (11.3).
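The proposed three-year cadence can be sketched as follows. This is purely an illustration of the suggestion above, not actual PCI SSC policy; the function and list names are hypothetical:

```python
# Hypothetical three-year assessment cadence: a full assessment in year one of
# each DSS update cycle, risk-assessment-driven reviews in years two and three.
RISK_REVIEW_INPUTS = [
    "firewall reviews (1.1.7)",
    "application security testing (6.6)",
    "access reviews (7)",
    "vulnerability scans (11.2)",
    "penetration tests (11.3)",
]

def assessment_scope(year_in_cycle):
    """Return the suggested assessment activities for year 1, 2 or 3 of the cycle."""
    if year_in_cycle == 1:
        return ["full assessment of all applicable requirements"]
    return ["risk assessment"] + RISK_REVIEW_INPUTS
```

Years two and three deliberately reuse evidence the organization must already produce (scans, pen tests, access reviews), so the reduced cadence doesn’t mean reduced diligence.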

Suggestion #5: Redundant (or less relevant) controls

PCI SSC may want to review the value of certain control requirements, considering that newer requirements added in subsequent versions can reduce the usefulness or relevance of those controls, or perhaps even make them redundant.

For example, the PCI DSS v3 requirement around penetration testing changed considerably from the previous version. If an organization performs its penetration tests appropriately, there should not be much need for requirement 2.1, especially the rather elaborate testing procedures highlighted in the figure.

There are several other requirements or controls that fall into the same category of being less useful or even redundant.

Such redundancy should help make the case for deprecating or consolidating certain requirements. It also strengthens the case for moving away from the all-or-nothing philosophy we discussed under suggestion #2.

Suggestion #6: Reduce Documentation Requirements

PCI DSS in general requires fairly extensive documentation at all levels, as we noted when discussing compensating controls above.

Documentation is certainly useful and indeed strongly recommended in certain areas especially where it helps with communication and better enforcement of security controls that help in risk reduction.

On the other hand, documentation purely for compliance’s sake should be avoidable, especially when it doesn’t improve security safeguards to any appreciable extent.

…………………………………………………

That was perhaps a longer post than some of us are used to, especially on a blog. These are the suggestions I can readily think of; I’ll be keen to hear any suggestions you may have, or comments and critiques of my thoughts.

In most cases, a better security posture is about getting a few basics right, and this recent incident involving the breach of a Healthcare.gov server may be further proof of that.

Based on this article from CSO Online, it appears the problem may have been that the “development server was poorly configured and used default credentials”.
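Default credentials are exactly the kind of basic failure that a trivial automated check can catch. Here is a minimal sketch of such a check; the credential list and account data are made up for illustration, and a real sweep would pull from a maintained default-credential database:

```python
# Toy hygiene check: flag accounts still using well-known default credentials.
# The defaults listed here are a small illustrative sample, not a real database.
DEFAULT_CREDENTIALS = {("admin", "admin"), ("root", "root"), ("admin", "password")}

def find_default_credentials(accounts):
    """Return the (username, password) pairs that match known defaults."""
    return [cred for cred in accounts if cred in DEFAULT_CREDENTIALS]
```

A check like this costs almost nothing to run on every server build, development or production, which is the point of calling it a “basic”.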

At the same time, the article says that “the website undergoes quarterly security audits, as well as daily security scans and hacking exercises”. I am guessing, then, that the development server wasn’t included in the “hacking exercises”, which I assume are penetration tests performed the way they should be.

Many times, you might be OK not putting your development environment through a full pen test, especially when you are sure you have the security basics right, like not using default credentials and configurations, as happened here. However, when you are as “prominent” as Healthcare.gov is, for reasons we all know, the elevated risk profile should require performing the necessary due diligence at least once upon installation or major change.

Again, we don’t know all the details, but based on what is being reported, this incident adds to the evidence that better security is mostly about the basics. However, as we know from experience, basic doesn’t mean easy, because there is this thing called execution, which many organizations are not effective at. As they say, talk is cheap.

To be sure, we are not necessarily referring to the need for money or funding (though that may be a problem for some organizations). Healthcare.gov is again a good example in this context because I doubt they have any problem with funding for security. Considering the hiccups during their initial months after launch, I suspect they don’t want to be in the news for anything except good enrollment numbers, let alone a security breach.

Executing the basics in security takes a high standard of professional due diligence by the individuals or teams involved in planning and running the security program. Implementing sophisticated technologies or hiring expensive consultants is not going to be very useful if the foundational aspects are not effective.

Image courtesy: lovethispic.com

Notes

I used Healthcare.gov as an example since the incident was in the news this week. They are also a good illustration of the fact that the best funding, technologies, or consulting resources still cannot assure that you will not have a security breach.

Regardless of the breach (which appears not to have been damaging, since no personal information was taken), one must note that they probably did a good job in noticing anomalies on a development server. Considering that many organizations can’t detect breaches in time, or at all, even in their production environments (see our posts here and here), the Healthcare.gov team has arguably done a better job. We’ll probably learn more details in the coming days, but it appears the circumstances and consequences weren’t too bad.

You had me on board until I saw this statement in your guidance [1] released yesterday.

“However, using risk as the basis for an organization’s information security program does not permit organizations to avoid or bypass applicable PCI DSS requirements or related compensating controls. In order to achieve compliance with PCI DSS, an organization must meet all applicable PCI DSS requirements.”

I believe we need a change in your “all requirements mandatory” approach. I think it leads to compliance fatigue and misguided spend of already limited security budgets.

Almost all Payment Card Industry (PCI) breaches over the past year, including the most recent one at Supervalu, appear to have the following aspects in common:

1. They involved some compromise of Point of Sale (POS) systems.

2. The compromise and breaches continued for several weeks or months before being detected.

3. The breaches were detected not by the retailer but by some external entity: the FBI, the US Secret Service, a payment processor, the card brands, an issuing bank, etc.

4. At the time each breach was disclosed, the retailer appears to have had a passing PCI DSS certification.

Anyone that has a reasonable understanding of the current Information Security landscape should know that it is not a matter of “if” but “when” an organization will get compromised. Given this humbling reality, it only makes sense that we must be able to detect a compromise in a “timely” manner and hopefully contain the magnitude of the breach before it gets much worse.

Let’s consider the following aspects as well:

PCI has more prescriptive regulations, in the form of PCI DSS and PA DSS, than just about any other industry. As a case in point, consider the equivalent regulations for Electronic Health Record systems (EHRs) in the United States: the EHR Certification regulation (the PA DSS equivalent), with requirements highlighted yellow in this document, and the Meaningful Use regulation (the PCI DSS equivalent), with requirements highlighted green. You will see that the PCI regulations are far more comprehensive in both breadth and depth.

PCI DSS requires merchants and service providers to validate and document their compliance status every year. For the large retailers that have been in the news for the wrong reasons, this probably meant having an external Qualified Security Assessor (QSA) perform an on-site security assessment and provide a passing Report on Compliance (ROC) every year.

As for the logging and monitoring requirements that should help detect a potential compromise, both PCI DSS (Requirement 10) and PA DSS (Requirement 4) are as detailed as any security framework or regulation I am aware of.

Even if you think Requirement 10 can’t help detect POS malware activity, there is PCI DSS requirement 12.2, which requires a security risk assessment at least once a year. The risk assessment must consider current threats and vulnerabilities. Given the constant stream of breaches, one would think POS malware threats are accounted for in these risk assessments.

These large merchants have been around for a while and are supposed to have been PCI DSS compliant for several years. One would therefore think they have appropriate technologies and processes to at least detect a compromise that results in breaches of this scale.

So, what do you think are the reasons the retailers, or the PCI regulations, are not effective at even detecting the breaches? More importantly, what changes would you suggest, both to the regulations and to how the retailers plan and execute their security programs? Or perhaps even to how the QSAs perform their assessments in issuing passing ROCs?