Keeping Up With the Future: Risk Management for Rapid Technology Adoption

That rumbling you hear? It's the pace of change quickening for organizations thanks to advances in technology. That pace is having an impact on risk profiles; companies no longer have the luxury of time to adapt to new paradigms or threats. Fortunately, there are ways to mitigate the impact of those changes so that organizations don't have to feel like they're being steamrolled by the future.

By Ed Moyle
02/22/13 5:00 AM PT

Everyone knows that protecting an organization's technology footprint has always been a delicate balancing act. Nowadays, nearly everything about a given organization's technology portfolio is in a near-constant state of change: technologies change, usage changes and the threat landscape changes. Changes come at higher frequency -- and at increasing scale.

Now more than ever, there is no status quo in technology. This presents a bit of a quandary for organizations that wish to approach technology risk in a systematic and structured way. Why? Because how do you apply a systematic, structured process to something that changes in an unstructured, unpredictable way?

Risk management has always been based on assumptions, which act as inputs into the organizational risk equation. If assumptions change, the risk profile of the organization changes with them. The faster they change, the more volatile the risk environment becomes (i.e., the more it fluctuates over a given time window), and the more fluid response planning needs to be.
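To make the idea concrete, the classic qualitative risk equation -- risk as likelihood times impact -- can be sketched in a few lines. The values, scales and field names below are purely hypothetical, not drawn from any particular framework; the point is simply that assumptions are inputs, and a change to any one of them reprices the risk.

```python
# Minimal sketch of assumptions feeding a risk equation.
# Scales and numbers are illustrative only.

def risk_score(likelihood: float, impact: float) -> float:
    """Classic qualitative risk equation: risk = likelihood x impact."""
    return likelihood * impact

# Assumptions set at the time of an annual assessment:
assumptions = {"likelihood": 0.2, "impact": 8.0}
baseline = risk_score(**assumptions)

# Mid-cycle, a new threat doubles the assumed likelihood.
# Nothing about the asset changed -- only an assumption did,
# yet the organization's risk profile moves with it.
assumptions["likelihood"] = 0.4
updated = risk_score(**assumptions)

print(f"baseline: {baseline:.1f}, updated: {updated:.1f}")
```

If the assessment cycle is annual, that updated figure goes unseen for months; that lag is exactly the problem the rest of this article circles around.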

This raises the question of the utility of a traditional cyclical approach to risk. Specifically, does a cyclical risk management process make sense in light of the current pace of change? It can be a challenging question to unpack -- and it has some serious ramifications for organizational risk management efforts.

The Pace of Change Matters

To illustrate what I'm getting at, consider a firm that employs a "point-in-time" risk assessment and response methodology. The organization might conduct an annual risk assessment of the technology environment, it might evaluate -- whether qualitatively or quantitatively -- its current threat landscape and risk, and might select additional controls (or decommission old ones) based on the output of that analysis. Sound familiar? That's exactly the point.

Historically, the pace of change was slow enough that an approach like this one was not only workable but preferable. Technical limitations gated the pace of technology change, while business processes gated the pace of control selection and response planning (for example, the organizational budget cycle). Since you can't just spend money willy-nilly, and technology changes took years to take effect, what point was there in analyzing risk more frequently than changes occurred -- or more quickly than you could respond to what you found?

Those barriers that gated the frequency of change, however, are now coming down.

Consider virtualization as an example of changes to technology deployment. In a pre-virtualization world, the degree to which new platforms could be fielded was limited by the barriers to deploying new physical hardware -- often a lengthy process. As the organization adopts virtualization, requirements for physical infrastructure no longer limit the pace of deployment. Instead, the limitations shift to the storage and processing capacity of the hypervisors.

As the organization moves post-virtualization -- through the use of IaaS, for example -- the limiting factors change yet again: They become budgetary in nature, based on the cost of consumption, rather than technological, whether physical deployment or technical capacity.

On the business side, process-related limitations that have made mitigation actions take longer are also being reduced. We still have a cyclical annual budget that limits, to some degree, the technical controls we can deploy, but controls now arise from other avenues as well.

For example, changes at external service providers impact the control environment when those external parties are heavily leveraged. Instead of deploying large, monolithic security tools on an annual budget cycle, the control environment can shift fluidly as a cloud service provider adds to or subtracts from its control portfolio. And enterprises have a say in these changes -- either by influencing them directly through methods such as contractual pressure, or indirectly through pressure brought by other customers.

Updating the Model

The point is this: Things can now happen faster -- and because they can, they do. This means that a heavy, cyclical process for risk management might not be fluid enough to keep up. So what is an organization to do? How can we ensure that we still systematically manage risk, but do so in a way that stays relevant in light of a more fluid business and technology climate?

There are two primary ways to accomplish this goal. The first is to adjust the frequency of evaluation and response; in other words, to make the process occur more often. This makes the analysis/mitigation cycle more granular (which is good), but it also increases the time and resources involved (since you're doing the same thing, just more frequently).

Just as personnel were needed to conduct the assessment, complete the scoring and make mitigation decisions under the old model, so too will they need to be available to perform those actions once the cycle becomes more frequent. Enterprises may find that a lighter-weight methodology helps ease the burden here, but frankly, it can be challenging to lighten the process while keeping the same level of rigor.

The second approach is to move toward a more continuous model of risk analysis; specifically, one that attempts to decouple analysis from any time-bound constraint through near real-time scoring of risk based on the most current data. One model that attempts to do this is described in the DHS Federal Network Security Continuous Asset Evaluation, Situational Awareness, and Risk Scoring Reference Architecture (CAESARS) Report, along with extensions published by NIST in NISTIR 7756. While this material is fairly lengthy, the work is genuinely game-changing as it relates to evaluating enterprise technical risk.
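The essence of the continuous model can be sketched in miniature. To be clear, this is not the CAESARS architecture itself -- the asset names, data feeds and scoring weights below are all hypothetical -- it merely illustrates the shape of the idea: scores are recomputed whenever fresh data arrives, rather than once per calendar cycle.

```python
# Illustrative sketch of continuous, data-driven risk scoring.
# Not CAESARS/NISTIR 7756 itself; all names and weights are invented.

from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    exposure: float      # 0..1, e.g., internet-facing vs. internal-only
    open_findings: int   # unresolved findings from the latest scan feed

def score(asset: Asset) -> float:
    """Recomputed on every new data point, not once per annual cycle."""
    return asset.exposure * (1 + asset.open_findings)

inventory = [
    Asset("web-frontend", exposure=0.9, open_findings=3),
    Asset("hr-db", exposure=0.2, open_findings=1),
]

# In a real continuous model, an event stream (scanner results, config
# changes, provider notifications) would trigger re-scoring; a single
# pass over current data stands in for that loop here.
for asset in sorted(inventory, key=score, reverse=True):
    print(f"{asset.name}: {score(asset):.2f}")
```

The design point is the decoupling: because the score is a pure function of current data, "when was our last assessment?" stops being a meaningful question -- the ranking is only ever as stale as the feeds behind it.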

For organizations that find that model unapproachable or challenging to incorporate, another option might be employing a risk-based GRC tool (e.g., RSAM or Modulo Risk Manager) to help automate the collection and scoring of risk-relevant information. These products specialize in automating the risk review process and therefore help you get to a more continuous view of risk without (hopefully) the need to add staff.

The way that we approach risk is at a turning point. As we change our ways of doing business, we need to evaluate existing processes and decide if they make sense. If they do not, we need to change our approach to keep what we're doing relevant for the organization.

Ed Moyle is Director of Emerging Business and Technology for ISACA. His extensive background in computer security includes experience in forensics, application penetration testing, information security audit and secure solutions development.