Formal risk assessment methodologies try to take the guesswork out of evaluating IT risks. Here is real-world feedback on four such frameworks: OCTAVE, FAIR, NIST RMF, and TARA.

Assessing and managing risk is a high priority for many organizations, and given the turbulent state of information security vulnerabilities and the need to be compliant with so many regulations, it's a huge challenge.

Several formal IT risk-assessment frameworks have emerged over the years to help guide security and risk executives through the process. These include:

OCTAVE

OCTAVE (Operationally Critical Threat, Asset and Vulnerability Evaluation), developed at the CERT Coordination Center at Carnegie Mellon University, is a suite of tools, techniques and methods for risk-based infosec strategic assessment and planning.

OCTAVE defines assets as including people, hardware, software, information and systems. There are three models: the original, which CERT says forms the basis for the OCTAVE body of knowledge and is aimed at organizations with 300 or more employees; OCTAVE-S, similar to the original but aimed at companies with limited security and risk-management resources; and OCTAVE-Allegro, a streamlined approach to information security assessment and assurance.

The framework is founded on the OCTAVE criteria—a standardized approach to a risk-driven and practice-based information security evaluation. These criteria establish the fundamental principles and attributes of risk management.

The OCTAVE methods have several key characteristics. One is that they're self-directed: Small teams of personnel across business units and IT work together to address the security needs of the organization. Another is that they're designed to be flexible. Each method can be customized to address an organization's particular risk environment, security needs and level of skill. A third is that OCTAVE aims to move organizations toward an operational risk-based view of security and addresses technology in a business context.

Among the strengths of OCTAVE is that it's thorough and well documented, says Brooke Paul, managing director at Capital Informatics and former CSO at American Financial Group. "The people who put it together are very knowledgeable," says Paul, who has evaluated the framework for clients. "It's been around a while and is very well-defined and freely available."

Because the methodology is self-directed and easily modified, it can be used as the foundation risk-assessment component or process for other risk methodologies, says Ron Woerner, security systems analyst at HDR, an architectural and engineering firm. Woerner says he's used a hybrid of OCTAVE, FAIR and other methodologies.

"The original OCTAVE method uses a small analysis team encompassing members of IT and the business. This promotes collaboration on any found risks and provides business leaders [with] visibility into those risks," Woerner says. "To be successful, the risk assessment-and-management process must have collaboration."

In addition, OCTAVE "looks at all aspects of information security risk from physical, technical and people viewpoints," Woerner says. "If you take the time to learn the process, it can help you and your organization to better understand its assets, threats, vulnerabilities and risks. You can then make better decisions on how to handle those risks."

Experts say one of the drawbacks of OCTAVE is its complexity. "When it shipped, we spent hours trying to understand what it was that this package was going to do for us," says Adam Rice, global CSO and vice president of managed security services at Tata Communications, a provider of communications services.

FAIR

FAIR (Factor Analysis of Information Risk), created by risk-management specialist Jack Jones, takes a quantitative approach to assessing information risk. In much of current security practice there's a heavy reliance on practitioner intuition and experience, industry lore and best practices, Jones notes. While these are valuable, they don't consistently allow management to make effective, well-informed decisions.

FAIR is designed to address security practice weaknesses. The framework aims to allow organizations to speak the same language about risk; apply risk assessment to any object or asset; view organizational risk in total; defend or challenge risk determination using advanced analysis; and understand how time and money will affect the organization's security profile.

Components of the framework include a taxonomy for information risk, standardized nomenclature for information-risk terms, a framework for establishing data-collection criteria, measurement scales for risk factors, a computational engine for calculating risk and a model for analyzing complex risk scenarios.

Another plus is the common language used. The FAIR vernacular allows the IRM team and people from IT and the business lines to talk about risk in a consistent manner, Hayes says. "Ultimately, we want to be talking about exposure that any given finding poses to our company," he says. "The more business-focused that conversation is—especially when we are talking in terms of monetary exposure—the more meaningful the discussion becomes, which should facilitate more effective decision making."

Paul, who uses FAIR in his consulting practice as part of risk assessments for clients, says one advantage of the framework is that it doesn't use ordinal scales, such as one-to-10 rankings, and therefore "isn't subject to the limitations that go with ordinal scales. For example, 'high, medium and low' is an example of an ordinal scale, as is 'red, yellow and green' and 'one, two and three.' We wouldn't begin to imagine that we can add or multiply two medium values, nor would we add or multiply yellow plus green. Yet we see many risk calculations in our industry that do exactly that when they use addition and/or multiplication with numeric ordinal scales."

FAIR uses dollar estimates for losses and probability values for threats and vulnerabilities. Combined with a range of values and levels of confidence, it allows for true mathematical modeling of loss exposures, Paul says.
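Paul's point about mathematical modeling of loss exposures can be illustrated with a small Monte Carlo sketch. This is not FAIR's actual computational engine: the function name and all frequency and dollar figures are illustrative assumptions, and triangular distributions stand in for the calibrated PERT curves FAIR practitioners typically use.

```python
import random
import statistics

def simulate_loss_exposure(freq, magnitude, trials=100_000, seed=1):
    """Monte Carlo sketch of a FAIR-style annualized loss estimate.

    `freq` and `magnitude` are (low, mode, high) estimates for loss-event
    frequency (events/year) and loss magnitude (dollars/event).  Triangular
    distributions are a simple stand-in for calibrated PERT curves.
    """
    rng = random.Random(seed)
    annual = []
    for _ in range(trials):
        # random.triangular takes (low, high, mode) -- note the argument order.
        events = rng.triangular(freq[0], freq[2], freq[1])
        loss_per_event = rng.triangular(magnitude[0], magnitude[2], magnitude[1])
        annual.append(events * loss_per_event)
    annual.sort()
    return {
        "mean": statistics.fmean(annual),
        "p90": annual[int(0.90 * trials)],  # 90th-percentile annual loss
    }

# Illustrative inputs: an event expected 0.1-2.0 times/year, costing $10k-$400k.
result = simulate_loss_exposure(freq=(0.1, 0.5, 2.0),
                                magnitude=(10_000, 75_000, 400_000))
```

Because the inputs are ranges with a most-likely value rather than single ordinal scores, the output is a distribution of dollar losses that can legitimately be compared and aggregated, which is the property Paul describes.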

Another plus is that FAIR has more detailed definitions of threats, vulnerabilities and risks, Paul says. "Most of the methodologies have definitions, but stop at that level," he says. FAIR's taxonomy breaks the terms down at a more granular level.

"The taxonomy enables us to describe more easily and credibly how we arrived at our conclusions," Paul says. "This is useful in demonstrating rigor and mitigating the prevailing impression that our profession doesn't understand risk or is basing recommendations on [FUD]."

As for downsides, FAIR can be difficult to use and it's not as well documented as OCTAVE, Paul says. "It's not as easy to get started; you can download a lot of information about OCTAVE," he says. "It's all very thoroughly put together and easy for you to get up and running. FAIR lacks that."

Hayes cites as a shortcoming of FAIR the lack of access to current information about the methodology and examples of how the methodology is applied. "Creative searching will generate some results, but the methodology itself still feels underground," he says.

NIST RMF

The risk management framework (RMF) from the National Institute of Standards and Technology (NIST), described in Special Publication 800-37, walks organizations through six activities:

Categorizing information systems and the information within those systems based on impact.

Selecting an initial set of security controls for the systems based on the Federal Information Processing Standards (FIPS) 199 security categorization and the minimum security requirements defined in FIPS 200.

Implementing security controls in the systems.

Assessing the security controls using appropriate methods and procedures to determine the extent to which the controls are implemented correctly, operating as intended and producing the desired outcomes with respect to meeting security requirements for the system.

Authorizing information systems operation based on a determination of the risk to organizational operations and assets, or to individuals resulting from the operation of the systems, and the decision that this risk is acceptable.

Monitoring and assessing selected security controls in information systems on a continuous basis, including documenting changes to the systems, conducting security-impact analyses of the associated changes, and reporting the security status of the systems to appropriate organizational officials on a regular basis.
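The six activities above can be sketched as a simple lifecycle record. The class and field names here are hypothetical, not NIST's; the sketch captures only the step ordering and the fact that monitoring is continuous rather than a one-time gate.

```python
from dataclasses import dataclass, field
from enum import Enum

class RMFStep(Enum):
    CATEGORIZE = 1   # impact-based categorization (FIPS 199)
    SELECT = 2       # initial control baseline (FIPS 200)
    IMPLEMENT = 3
    ASSESS = 4
    AUTHORIZE = 5
    MONITOR = 6      # continuous; the process stays here

@dataclass
class SystemRecord:
    name: str
    impact_level: str                  # e.g. "low", "moderate", "high"
    step: RMFStep = RMFStep.CATEGORIZE
    history: list = field(default_factory=list)

    def advance(self):
        """Record the current step and move to the next; MONITOR repeats."""
        self.history.append(self.step)
        if self.step is not RMFStep.MONITOR:
            self.step = RMFStep(self.step.value + 1)

record = SystemRecord("payroll", impact_level="moderate")
for _ in range(6):
    record.advance()
# After authorization the system remains in continuous monitoring.
```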

"Not only is this framework valuable in assessing risks, it is invaluable in managing those risks," says Ruth Horaczko, practice leader of the risk assessment and IT division of Lyndon Group, an IT and business advisory consulting firm.

The primary strength of RMF is that it was developed by NIST, which is charged by Congress with ensuring that security standards and tools "are researched, proven and developed to provide a high level of information security infrastructure," Horaczko says.

Because government agencies and the businesses that support them need their IT security standards and tools to be both cost-effective and highly adaptable, Horaczko says, the framework is constantly being reviewed and updated as new technology is developed and new laws are passed.

Furthermore, independent companies have developed tools that support the NIST standards, Horaczko says. "Knowing that the basis for applications is stable, software development companies are more willing to develop application tools to support the framework," she says.

Rice says Tata Communications uses the NIST framework in several lines of business and in its IT department to assess and manage risk. The model helps the company determine when something exceeds a certain threshold of risk.

"I think a strength is that the authors of [the RMF] were thinking along the right lines in [identifying] major factors that deter risk," Rice says. "We looked at many, and I think their approach is solid. The framework allows the company to easily determine which systems or applications present the highest risk if security breaches occur."

As for weaknesses, "like any of these frameworks, you have to make sure that the people who are doing the risk assessment have the discipline to put reasonable data into the model so you get reasonable data out," Rice says.

"Also, it's a document; it's not an automated tool," Rice says. "I'd like to have a tool we could incorporate that allows the process to be completely automated. That's something we'll probably develop over time, because I'm not sure there are any off-the-shelf tools."

Another weakness of RMF is its nomenclature, Horaczko says. "The use of acronyms throughout the framework and supporting tools is pervasive," she says.

TARA

TARA (Threat Agent Risk Assessment), developed at Intel, is a predictive methodology for identifying which threat agents pose the greatest risk to an organization. By using a predictive framework to prioritize areas of concern, organizations can proactively target the most critical exposures and apply resources efficiently to achieve maximum results.

The TARA methodology identifies which threats pose the greatest risk, what they want to accomplish and the likely methods they will use. The methods are cross-referenced with existing vulnerabilities and controls to determine which areas are most exposed. The security strategy then focuses on these areas to minimize efforts while maximizing effect.

Intel says awareness of the most exposed areas allows the company to make better decisions about how to manage risks, which helps with balancing spending, preventing impacts and managing to an acceptable level of residual risk. The TARA methodology is designed to be readily adapted when a company faces changes in threats, computing environments, behaviors or vulnerabilities.

TARA relies on three main references to reach its predictive conclusions. One is Intel's threat agent library, which defines eight common threat agent attributes and identifies 22 threat agent archetypes. The second is its common exposure library, which enumerates known information security vulnerabilities and exposures at Intel. Several publicly available common exposure libraries are also used to provide additional data. The third is Intel's methods and objectives library, which lists known objectives of threat agents and the methods they are most likely to use to accomplish these goals.
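The cross-referencing of those three libraries can be sketched as follows. All library contents here are invented stand-ins, not Intel's actual TAL, CEL or MOL data; the sketch only shows the mechanics of ranking uncontrolled exposed areas.

```python
from collections import Counter

# Invented stand-ins for Intel's threat agent library (TAL), common exposure
# library (CEL) and methods & objectives library (MOL).
LIKELY_METHODS = {                      # archetype -> methods it favors (MOL)
    "disgruntled insider": {"data theft", "sabotage"},
    "organized criminal":  {"phishing", "data theft"},
}
EXPOSED_AREAS = {                       # method -> areas it can reach (CEL)
    "phishing":   {"email gateway", "user workstations"},
    "data theft": {"file shares", "databases"},
    "sabotage":   {"build servers"},
}
CONTROLLED = {"email gateway"}          # areas already covered by controls

def rank_exposed_areas(agents_in_scope):
    """Count how many in-scope agents can reach each uncontrolled area."""
    counts = Counter()
    for agent in agents_in_scope:
        for method in LIKELY_METHODS[agent]:
            for area in EXPOSED_AREAS[method] - CONTROLLED:
                counts[area] += 1
    return counts

exposure = rank_exposed_areas(["disgruntled insider", "organized criminal"])
# Both agents favor data theft, so file shares and databases rank highest.
```

The security strategy would then concentrate on the areas with the highest counts, which is the "minimize efforts while maximizing effect" focus the methodology describes.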

Hayes says he's reviewed information about TARA that Intel has released. "What I really like about TARA is the threat agent view of risk," he says. "There are parts of TARA—the threat agent library and the methods and objectives library—that can be easily used within other risk-assessment methodologies, especially if there is a need to standardize on common threat agents and corresponding methods."

TARA "appears to be a good tool for identifying, predicting and prioritizing threats against your infrastructure," Woerner adds. "You can use it to create common libraries that can be shared among different groups."

The framework "focuses on threats rather than assets, [on] what bad things can happen," Woerner says. "This is both good and bad. By focusing on threats rather than asset value, an assessor may miss the mark in identifying true infrastructure risks. It also seems to make the assumption that the only way to view risk is from the perspective of 'What's the worst thing that could happen?'"

When he's conducting a risk assessment, Woerner asks two critical questions: What's the most likely threat against a specific critical asset and what's the biggest impact that could occur with the asset? "TARA only addresses the likelihood of threat events, but doesn't take into account the risk's impact," he says.
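Woerner's critique boils down to the difference between ranking threats by likelihood alone and ranking them by likelihood times impact. A toy example with invented figures shows how the two orderings can invert:

```python
# Invented figures: annual likelihood and per-occurrence dollar impact.
threats = {
    "laptop theft":    (0.60,    20_000),
    "database breach": (0.05, 5_000_000),
}

by_likelihood = max(threats, key=lambda t: threats[t][0])
by_risk = max(threats, key=lambda t: threats[t][0] * threats[t][1])

# Likelihood alone ranks laptop theft first, but expected loss
# (0.05 * $5M = $250k vs. 0.60 * $20k = $12k) puts the breach on top.
```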

Paul says another drawback of the framework is that it's new and untested. "You don't hear a lot about people using" TARA, he says. "TARA also appears to be yet another qualitative methodology rather than one that can be used for quantitative analysis."