What gets measured gets done … why you must measure the effectiveness of your security program

A security program and its controls are a hypothesis, put in place and evaluated within an organization based on a set of assumptions and expected value. Testing that hypothesis is a critical success factor in an information security compliance program.

The concept of testing the viability of a hypothesis is not new, yet it is commonly missing from organizations’ security compliance programs. Consider all the areas of the business where hypotheses are tested and the results are fed back into the development process. Products may be dreamed up, prototyped, tested, iterated, and perhaps shelved or launched. The software development lifecycle (SDLC) includes developing code, testing it against use cases, and continually evaluating it against performance requirements, customer acceptance criteria, security requirements, and of course regulatory considerations.

Organizations are not lacking the ability to apply the scientific method, metrics, performance testing, or hypotheses. The opportunity lies in establishing proper use cases as they relate to information security compliance, and in rigorously challenging and tracking these policies, practices, and procedures against the real-life results of such deployments.

A few myths to dispel:

Organizations can define metrics and KPIs based on the root cause analysis and drivers for a set of security program controls

Metrics and KPIs should be tracked, challenged regularly, and brought to executive levels for acceptance of performance (an important element in tying the value of security programs to core business initiatives)

Controls do not beget controls

Technology need not beget more technology or safeguards

Sometimes no solution is guaranteed, so transparency on performance, predictability, impact, cost, and residual risk is a key factor for all involved

The takeaways here include at least the following considerations:

Identify why such policies, practices, and controls are deployed.

Determine the root cause they are solving.

Define the performance expected.

Measure that performance against the metric.

Is the performance conforming to objectives?

Are the metrics appropriate for reaching the conclusion sought, given the root cause and the technology information available?

Can security compliance program elements be consolidated to address the root causes?

Can efficiencies be gained by consolidating technology and safeguards?

Are there architecture opportunities that can be considered?

Are there business procedure changes that could better enable the business activity and directly improve the overall state of the business?
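The core loop in the takeaways above (define the expected performance, measure against the metric, check conformance to objectives) can be sketched in code. This is a minimal, hypothetical illustration; the control names, targets, and measured values are invented for the example, not prescribed metrics.

```python
# Hypothetical sketch of tracking control performance against defined
# objectives. Names, targets, and measured values are illustrative only.
from dataclasses import dataclass


@dataclass
class ControlMetric:
    name: str        # the control/KPI being tracked
    root_cause: str  # why the control exists (the driver it addresses)
    target: float    # performance expected, e.g. 0.95 = 95% compliance
    measured: float  # observed performance this review period

    def conforms(self) -> bool:
        # Is the performance conforming to objectives?
        return self.measured >= self.target

    def report(self) -> str:
        status = "MEETS" if self.conforms() else "MISSES"
        return (f"{self.name}: measured {self.measured:.1%} vs "
                f"target {self.target:.1%} -> {status} objective "
                f"(root cause: {self.root_cause})")


# Example review-period data (invented for illustration)
metrics = [
    ControlMetric("Patching within 30 days", "reduce exposure window", 0.95, 0.91),
    ControlMetric("MFA coverage", "credential theft", 0.99, 0.997),
]

for m in metrics:
    print(m.report())
```

A report like this, produced each review cycle and brought to executive levels, supports the "challenge regularly and accept performance" element described earlier.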

There are numerous additional considerations, but as with all enhancements, focus on a small set of tasks and iterate. Over a few cycles, efficiencies will be gained internally, and the practices will begin to transform to reflect the culture and operating habits of the business. A word of caution, though: don’t elongate the process. Once a method is established and its advantages realized, scale rapidly to high-impact areas (the definition may be based on user impact, risk impact, dollars or revenue served, etc.).

The thoughts here are based on personal experience building and designing global security programs. Some elements described may need customization in approach and process based on your own organization’s structure.

James, thank you for sharing your work. My one takeaway from working with metrics this year is that many organizations fear metrics because they do not believe they are mature enough to have good ones (due to not having good sources of data, lack of data completeness or accuracy, etc.). However, metrics do not need to be perfect. While metrics based on incomplete or inaccurate data may not be an entirely accurate representation of the environment, metrics based on existing data would at least be somewhat representative of the general nature of an organization’s environment. In this case, in my opinion, I would rather have metrics, albeit imperfect, to aid in decision making and improvement efforts, than none at all. Once metrics have been established, organizations can work on improving their accuracy and completeness.