Today we unveiled ten best practices to help companies prepare for a new era of risk and regulatory oversight. In the New Era of Risk Management, companies will seek to integrate risk management silos from across the business and squeeze out additional efficiencies. This integration will reduce costs through a consolidated technology infrastructure and shared processes and will provide better transparency into the interdependencies of risks in the business.

OpenPages is working closely with its customers as they make the transition to a risk-based approach to managing their business. The ten best practices represent immediate actions OpenPages customers are taking and serve as guidance for others to ensure that their organizations are prepared to face new risk and regulatory demands. Check them out here. We also recently published a paper on the New Era of Risk Management.

Over the years, there have been many studies of CEO compensation and risk taking, with outcome data drawn from public company filings. The latest salvo comes from two professors, one from BYU and the other from Penn State, who have published a new paper in the current issue of The Academy of Management Journal. They studied 950 companies from 1994 through 2000 and found that CEOs who received more than half their compensation in stock options were more likely to undertake risky investments to deliver extreme company performance. The problem is that these investments were more likely to end up in big losses than big wins.

Floyd Norris in the New York Times describes the conclusion the study’s authors reach: CEO pay should include deep-in-the-money options and longer holding periods, so that CEOs act more like shareholders.

This may or may not be a good idea. What’s clear is that in the studied companies, risk taking was at the discretion of the CEO. Companies apparently could not distinguish between good and bad risks, and the decision about whether to take them rested on the shoulders of the CEO. But this need not be the case. Better transparency into the state of risk in the business would have shone more sunlight on these so-called risky decisions. In the current climate of risk management focus, boards complain that they don’t have good visibility into the state of risk in the enterprise, and judging from this study, this lack of visibility is causing poor performance, with companies investing in areas with subpar returns for the chance of a big win. This bodes well for the risk management business.

Prior to the onset of the Basel II Accord and its resulting loss event category structure, there were no existing or suggested standards for how financial institutions should classify loss events and risks. The reality was that there was no need for a standard, as companies were not particularly focused on tracking loss events and identifying operational risks within a formal structure. As banks were nudged along the operational and enterprise risk management path by the regulators and Basel II, the need for guidance became evident, and the Basel II loss event category structure emerged to meet it. Many financial institutions embraced the new standard and began to implement their programs. Although the Basel II category structure was largely designed for the classification of loss events, many institutions have been leveraging the taxonomy as a risk classification structure as well.
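
To make the idea concrete, here is a minimal sketch (in Python) of how such a taxonomy might be represented in a loss event system. The seven Level 1 categories are the standard Basel II event types; the Level 2 entries shown are illustrative rather than a complete mapping.

```python
# Minimal sketch: the seven Basel II Level 1 event-type categories,
# with a few illustrative (not exhaustive) Level 2 subcategories.
BASEL_II_EVENT_TYPES = {
    "Internal Fraud": ["Unauthorized Activity", "Theft and Fraud"],
    "External Fraud": ["Theft and Fraud", "Systems Security"],
    "Employment Practices and Workplace Safety": [],
    "Clients, Products and Business Practices": [],
    "Damage to Physical Assets": [],
    "Business Disruption and System Failures": [],
    "Execution, Delivery and Process Management": [],
}

def classify_loss(description: str, level_1: str) -> dict:
    """Attach a Level 1 event type to a raw loss event record."""
    if level_1 not in BASEL_II_EVENT_TYPES:
        raise ValueError(f"Not a Basel II Level 1 category: {level_1}")
    return {"description": description, "level_1": level_1}

event = classify_loss("Wire transfer keyed to the wrong account",
                      "Execution, Delivery and Process Management")
```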

As financial institutions gained experience in operational risk management and the implementation of such risk programs within their organizations, they began to question the business alignment and validity of the Basel II loss event category model. Various consortia, industry associations, consultants, academic researchers, and analysts began to study the structure and poke holes in its loss-type basis, and alternate classification models began to emerge. The RMA joined with RiskBusiness to coordinate an effort with banks to establish standards for Key Risk Indicators, which resulted in a risk classification structure that is gaining popularity. The Operational Riskdata eXchange Association (ORX) formed as a consortium to provide a platform for the exchange of operational loss data, and in due course developed standards and a classification structure for its member financial institutions. We also see the BITS organization looking at loss and risk classification structures, and many articles have been written on the topic.

The article speaks to the importance of organizing data in a sound and clear-cut manner, and concludes that the Basel II loss event category structure falls short, with too much allowance for inconsistency. Dr. Alvarez proposes a classification schema based on causes, as opposed to types of loss events, which leads to a more structured and consistent classification of loss events. I encourage you to read this article, as it represents the current thinking in the industry: the causes of an event are important to identify and understand. When an organization captures its loss data and views risks from the causal point of view, it is better able to analyze the data and more effectively manage and mitigate risks, lowering operational losses and increasing operating efficiencies.
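
As a rough illustration of the causal view, here is a hedged Python sketch that tags each loss with a cause alongside its event type. The four causes mirror the Basel definition of operational risk (losses from failed processes, people, systems, or external events); the records themselves are hypothetical.

```python
# Sketch: recording a causal dimension alongside the event type.
# The four causes follow the Basel definition of operational risk:
# inadequate or failed processes, people, systems, or external events.
CAUSES = {"process", "people", "systems", "external"}

def record_loss(amount: float, event_type: str, cause: str) -> dict:
    if cause not in CAUSES:
        raise ValueError(f"Unknown cause: {cause}")
    return {"amount": amount, "event_type": event_type, "cause": cause}

# Two hypothetical losses with different causes; the causal view points
# to different mitigations (training vs. system hardening).
losses = [
    record_loss(50_000, "Execution, Delivery and Process Management", "people"),
    record_loss(120_000, "Business Disruption and System Failures", "systems"),
]
total_by_cause = {}
for loss in losses:
    total_by_cause[loss["cause"]] = total_by_cause.get(loss["cause"], 0) + loss["amount"]
```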

There will likely be more debate and thought put into loss event and risk taxonomies over the next few years, and the industry’s need for an effective and consistent standard that could enable benchmarking of operational risk will help drive convergence to a widely accepted loss event and risk data classification schema.

When I took my first class on financial engineering as a naïve applied mathematics undergrad, we started with portfolio selection and the capital asset pricing model. In my typically confident (some might say arrogant) fashion, I decided I knew more than the professors, and that we should be focused on maximizing returns rather than treating the notion of risk with almost religious deference. A few case studies on LTCM (and modern hedge funds) bring the importance of risk into sharp relief. And yet, years later, I did it again. A few years ago, I claimed to be an expert on risk. In actuality, I was an expert on security who knew very little about risk. In fact, I knew so little about risk, I had no idea how little I knew about it.

I come from the information security space. I spent a number of years there, and throughout my tenure, I continually abused the word “risk.” Oh, I had no idea I was doing it. In fact, 99% of my colleagues in security were doing the same thing. The fact of the matter is, the cloak-and-dagger security types, self-professed “security experts,” continue to misuse the word. It wasn’t until I really tried to peel back the onion and build a product that managed risk for the security space that I realized that what often passes for risk management in IT is actually control management and compliance. True risk management deals with uncertainty around unexpected losses – looking at consequences in business terms and weighing those against potential reward. Information security management, as currently practiced, is in most regards a necessary, but not sufficient, component of information risk management.
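
To show what “weighing consequences against reward” can look like in practice, here is a minimal sketch using the classic annualized loss expectancy (ALE) calculation; all dollar figures and frequencies below are hypothetical.

```python
# Sketch: annualized loss expectancy (ALE) = single loss expectancy
# times annual rate of occurrence. Numbers are hypothetical.
def ale(single_loss_expectancy: float, annual_rate: float) -> float:
    return single_loss_expectancy * annual_rate

ale_before = ale(250_000, 0.20)  # a $250k incident expected once every 5 years
ale_after  = ale(250_000, 0.05)  # a control cuts frequency to once in 20 years
control_cost = 20_000

# The control earns its keep only if the risk reduction exceeds its cost.
net_benefit = (ale_before - ale_after) - control_cost
print(f"ALE before: ${ale_before:,.0f}  after: ${ale_after:,.0f}  "
      f"net benefit: ${net_benefit:,.0f}")
```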

A little experience in different disciplines and verticals can make all the difference in the world. Financial services is arguably the most sophisticated industry when it comes to managing risk. From a credit and market risk perspective, the average investment bank or hedge fund has teams of Ivy League PhDs running thousands of financial models 24×7, with a virtually unlimited budget, on server farms with more firepower than NASA. From an operational risk perspective (much more analogous to information security), these same banks have taken the lessons learned in years of managing credit and market risk and applied them to this more esoteric domain. Where they lack the hard, quantitative data of their credit and market risk peers, they’ve devised clever workarounds.

Information security practitioners, on the other hand, are great at managing compliance by checklist. We have impressive standards, frameworks and regulations like ISO 17799, PCI, BITS, CobiT and a whole slew of others that are pretty good at spelling out a series of “thou shalt have’s.” NIST 800-30 even gives a set of guidelines for doing risk management for IT systems. So what’s missing?

Information security standards and guidelines are a good thing, but they can be very easily misused and abused. They encourage cookie-cutter thinking, and miss the bigger point: no two industries are the same. No two companies within an industry are the same. No two geographies within a company are the same. No two data centers within a company geography are the same. No two services running on hardware in the same data center are the same. No two business processes serviced by the same service are the same. And guess what? Depending on the time of the year, the needs of your customers and other factors, the same business process may have different needs on different days!

OK, clearly mapping all of those dependencies is hard. So, most organizations give a data sensitivity rating to their information assets. Maybe they get cute and provide a “platinum, gold, silver, bronze” type scheme. Maybe they even set some arbitrary control thresholds based on this classification. So why do we have multiple large-company executives going on record claiming that PCI compliance is too hard? Two things here: first, PCI is an ISO 17799 derivative. Second, with sensitive customer data sitting on these information assets, shouldn’t they have already warranted a platinum rating? Logically, it should follow that in any 17799 shop (and there are many), information assets should be close to PCI compliant.
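
For what it’s worth, here is a toy sketch of the kind of tier-to-controls mapping the paragraph describes; the tier names follow the “platinum, gold, silver, bronze” scheme above, and the thresholds are invented for illustration.

```python
# Toy sketch: a sensitivity tier mapped to a control baseline.
# Tier names follow the scheme above; thresholds are invented.
CONTROL_BASELINES = {
    "platinum": {"encrypt_at_rest": True,  "mfa": True,  "access_review_days": 30},
    "gold":     {"encrypt_at_rest": True,  "mfa": True,  "access_review_days": 90},
    "silver":   {"encrypt_at_rest": True,  "mfa": False, "access_review_days": 180},
    "bronze":   {"encrypt_at_rest": False, "mfa": False, "access_review_days": 365},
}

def required_controls(tier: str) -> dict:
    return CONTROL_BASELINES[tier]

# Assets holding cardholder data should already sit in the top tier,
# which is the point the paragraph above is making.
print(required_controls("platinum"))
```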

In reality, however, we all know that InfoSec groups are asked to do way too much with increasingly smaller budgets. It’s difficult to get management to buy into the need for information security, which exacerbates the problem. As such, it’s critical that we work smarter, not harder. If only there was a tool that let us do that…

Enter risk management. Throwing a set of checklist controls at our enterprise architecture is not risk management. Theoretically, it should result in some risk reduction, granted, but that’s not an optimal return on investment. Imagine running a hedge fund without a complex risk model for every conceivable position – running countless layers deep. You’d be insolvent within a month.

So what are the roadblocks to risk management in information security? The biggest is a lack of business context. For years, IT has talked about aligning to the needs of the business. It’s still a challenge. The fact of the matter is, it’s tough getting C-level executives to prioritize business objectives and processes amongst themselves (think politics, agendas, silos), much less as a deliverable for IT (which they see as less and less of a strategic asset). And even if they could agree on a real priority for those corporate objectives, navigating the rat’s nest of dependencies down from the objective to the asset level would prove difficult for most organizations. As a result, it’s impossible to prioritize the consequence of an attack on any specific, tangible thing.
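
As a thought experiment, here is a small Python sketch of that objective-to-asset dependency chain; every name in the graph is hypothetical, but it shows how the consequence of losing an asset could be traced upward if the mapping existed.

```python
# Hypothetical dependency graph: each node lists what it depends on.
DEPENDS_ON = {
    "objective:grow-online-revenue": ["process:order-fulfillment"],
    "process:order-fulfillment": ["service:checkout-api"],
    "service:checkout-api": ["asset:db-server-7", "asset:web-farm-2"],
}

def impacted_by(target: str, graph: dict) -> set:
    """Find every node that directly or transitively depends on target."""
    impacted = {target}
    changed = True
    while changed:
        changed = False
        for node, deps in graph.items():
            if node not in impacted and impacted & set(deps):
                impacted.add(node)
                changed = True
    return impacted - {target}

# Losing the database server touches the checkout service, the
# fulfillment process, and ultimately the revenue objective.
print(impacted_by("asset:db-server-7", DEPENDS_ON))
```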

That starts to cover the consequence side of things. How about likelihood? Actuaries have tables for flood rates; financial engineers have volatility metrics for options calculations. Unfortunately, it’s very difficult to compile reliable loss data on the IT side of the house. Difficult, but not impossible. We safeguard that information as if it were customer data. But if you look at our peers managing operational risk, there are several initiatives around sharing anonymous loss data. Banks collaborate on internal loss metrics to quantify the costs and probability of fraud, malfeasance, etc. Back to security: TJX set aside a penny a share to cover its data breach, and current press estimates range from $12 to $25 million. (Many experts think these estimates are wildly optimistic, by the way.) Are the metrics we have available perfect? Not even close. But qualitative factors are a good stopgap to supplement the limited quantitative data we have.
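
For the quantitatively inclined, here is a hedged sketch of the frequency/severity simulation (the loss distribution approach) that operational risk teams lean on when hard data is thin; the Poisson and lognormal parameters below are pure placeholders.

```python
# Sketch: a tiny loss-distribution-approach simulation. Annual event
# counts are Poisson, severities are lognormal; parameters are placeholders.
import numpy as np

rng = np.random.default_rng(42)

def annual_losses(n_years=10_000, freq=3.0, sev_mu=10.0, sev_sigma=1.5):
    counts = rng.poisson(freq, n_years)
    return np.array([rng.lognormal(sev_mu, sev_sigma, n).sum() for n in counts])

losses = annual_losses()
print(f"Expected annual loss: ${losses.mean():,.0f}")
print(f"99.9th percentile year: ${np.quantile(losses, 0.999):,.0f}")
```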

Don’t get me wrong – we have some brilliant people working in information security. Brilliant people doing amazing things with limited budgets, in a game with stakes that would make a high roller at the Bellagio head for the nickel slots. What we need is to buy them some leverage. Risk management helps information security professionals make better decisions faster, helping practitioners do more with less. Risk management is a great tool to help information security practitioners work more efficiently – just don’t confuse the two disciplines.

I’ve been having conversations with customers and prospects about the value of an integrated risk management platform. (You can substitute ‘GRC’ for ‘integrated risk management’ if you’ve been reading any analyst covering the space recently.) There are lots of value drivers, but to date most CIOs haven’t embraced the logic and are opting instead to buy for very specific solution areas. There are some exceptions, and on Friday I had a conversation with one of those exceptions, who made a compelling case for why an IT organization should work with the business on an integrated control environment.

The specific case this customer made was around the need to manage the General Computing Controls associated with Sarbanes-Oxley. The finance side of the company had been the buyer of their SOX solution, and they, of course, look at the world through accounts and processes. Their SOX solution was configured accordingly, and all of their controls roll up to processes associated with accounts. Unfortunately, the IT organization doesn’t look at the world that way, and, according to this customer, “There’s nowhere in this model to stick the IT controls in a rational way.” The IT organization would much rather organize the GCCs by ISO 17799 or some other framework and associate each control with the appropriate risk in the finance model. In this way, the IT organization can leverage a control management structure already in place, without duplicating any effort.
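
Here is a hedged sketch of what that mapping could look like in data terms: IT keeps its own control hierarchy, and each control points at one or more risks in the finance model. All identifiers below are invented for illustration.

```python
# Sketch: a many-to-many mapping from IT controls (organized by the
# IT framework, e.g. ISO 17799) to risks in the finance SOX model.
# All identifiers are illustrative.
CONTROL_TO_RISKS = {
    "ISO-17799-10.5.1 (information back-up)": ["SOX-RISK-REV-004"],
    "ISO-17799-11.2.2 (privilege management)": ["SOX-RISK-REV-004", "SOX-RISK-AP-011"],
}

def controls_for_risk(risk_id: str) -> list:
    """Invert the mapping: which IT controls support a given finance risk?"""
    return [ctrl for ctrl, risks in CONTROL_TO_RISKS.items() if risk_id in risks]

# Finance sees its risks covered; IT manages controls in its own structure.
print(controls_for_risk("SOX-RISK-REV-004"))
```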

This is the most basic value proposition for an integrated risk management platform. And many companies are seeing big savings as the number of regulations they are trying to manage increases. Sure, you can probably manage SOX in a bunch of spreadsheets, but try adding a couple more regs, plus reporting and policy management, and you’re very quickly into the realm of a purpose-built solution. The interesting problem is that the cost of siloed solutions doesn’t fall fully in the office of the CIO. If it did, we would have many more CIO converts.

We’ve blogged frequently on the topic of IT risk management, most recently here. With recent events highlighting the need for better risk management, now, more than ever, people are thinking about how to improve their processes and technology for supporting their risk management programs. Ben Worthen over at the WSJ BizTech blog has written recently that tech departments shortchange risk management. We couldn’t agree more.

The basic problem, as Symantec’s Samir Kapuria notes via Ben’s post, is that IT tends to think of risk management as a project rather than a continuous process. This may be because most IT infrastructure vendors sell risk management for project delivery but don’t really have solutions that allow IT to take a risk-based approach to all their activities. It may also be the result of IT having to keep everything running, all the time. Regardless, unless you start with a top-down approach using a risk assessment process, identifying which vulnerabilities map to the most significant potential business impacts, you will never be able to allocate IT resources appropriately. Once you understand how realized risks will impact the business, you can take a truly risk-based approach to IT management. Obviously, we have a horse in this race, but any effort to tackle the IT risk management challenge must involve the business and identify the key risks therein.
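
A minimal sketch of what that top-down matching might look like: score each vulnerability by technical severity times the business impact of the asset it sits on, then work the list from the top. All scores and names are hypothetical.

```python
# Sketch: rank vulnerabilities by severity weighted by business impact.
# Names, severities, and impact ratings are hypothetical.
vulns = [
    {"id": "VULN-A", "severity": 9.8, "asset": "test-lab-box", "biz_impact": 1},
    {"id": "VULN-B", "severity": 6.5, "asset": "payments-db",  "biz_impact": 9},
    {"id": "VULN-C", "severity": 7.2, "asset": "hr-portal",    "biz_impact": 5},
]

for v in sorted(vulns, key=lambda v: v["severity"] * v["biz_impact"], reverse=True):
    print(v["id"], v["asset"], round(v["severity"] * v["biz_impact"], 1))

# The critical-severity issue on a lab box drops below the medium one
# on the payments database once business impact enters the score.
```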

On a daily basis, we’re out speaking with prospects, customers, analysts, press, and thought leaders in the GRC/ERM space. Over the course of the last year, we’ve heard many myths about risk management, and, over the course of the next couple weeks, we’ll address these myths. But we thought that we would give you a taste of what’s to come, so here is a list of the top 10 myths in risk management. Please feel free to add your own in the comments section. This list is certainly not exhaustive!

In November, I blogged about the difference between IT Risk Management and Information Security. For the full post, read here.

There’s a big difference between tactical execution and strategic oversight. Therein lies the challenge with most information security programs: they place far too much emphasis on the how and what, and far too little on the why. Information risk management, on the other hand, is necessary to prioritize efforts, and concerns itself with the why.

The problem (and it’s a good problem to have) is that we’ve got a lot of great information available to us regarding how and what. There are libraries of control checklists from numerous standards organizations that provide great common-practice guidance on how to secure information assets. As new vulnerabilities are discovered, new patches and workarounds are circulated and proactively communicated through a huge number of alerting services. Modern information security practices are mostly controls-based, i.e., they focus on the what. They largely ignore the why – the element of business risk – because it’s too difficult to understand.

Where this approach falls down is that there will always be far too much to do. There are too many vulnerabilities to remediate and too many controls to implement across the typical enterprise. As a result, critical deficiencies will go unmanaged. True risk management requires a business perspective on these deficiencies. Only with that business risk perspective is it possible to focus on doing the right things first. That’s lacking in the vast majority of modern businesses, and as a result, time is wasted and risk posture suffers.

As we mentioned last week, during the heyday of buying for Sarbanes-Oxley (SOX) compliance solutions, many companies put in place technology platforms that now support a variety of risk and compliance initiatives. SOX solutions were generally purchased with the tacit approval of IT, but, given the range of solutions currently in deployment (spreadsheets, custom applications using Microsoft Access as a platform, and COTS SOX solutions), it is clear that IT never standardized on a strategy for managing risk and compliance data. The result is that today CIOs have an opportunity to either leverage their existing technology or put in place a standard platform to support risk and compliance data and processes.

The reality is that many CIOs continue to allow the business to buy disparate platforms for different GRC solutions. In numerous buying decisions, IT is at the table to support solution implementation rather than thinking about the long term strategic benefits of a common GRC platform. Just as disparate customer data marts drove down customer satisfaction rates and hampered sales efforts, leading to the rise of the CRM market, so too will scattered risk and compliance data marts cause an immense amount of pain for risk managers trying to get a clear picture of risk throughout the business.

ERM, similar to most business processes, is not a “one-size-fits-all” solution. It has to be customized and tailored for each firm. As Mark Olson of the Federal Reserve notes, “An effective enterprise-wide compliance-risk management program is flexible to respond to change and it is tailored to an organization’s corporate strategies, business activities and external environment.” (April 10, 2006)

Companies that try to implement an out-of-the-box methodology will likely fail. ERM methodologies and taxonomies must be adapted to a company’s legal, regulatory, economic and competitive environment, all of which can vary dramatically by industry, and must, of course, be tailored to the company’s internal processes and culture. Further, the risk framework must be able to adapt to change over time to avoid losing competitive advantage.

I’ll be the last one to tell you that a strong central risk management function is a bad thing. Unfortunately, many organizations make the mistake of investing only in a centralized function because it’s too difficult to federate risk management and push it down to lower levels of responsibility in the organization. It’s a classic consistency vs. quality of information problem.

Accurate information lives at the business line level – a manufacturing company’s CRO may not know that the company is throwing away millions of dollars a year due to a lack of quality suppliers, but the supplier quality manager certainly does. The challenge is that it’s traditionally very expensive to consolidate this local, lower-level information. Organizations attempt to survey and assess process owners, but the information comes back in various formats and of varying quality, leading to information silos – it’s impossible to get an apples-to-apples comparison. Out of frustration, many of these efforts fail, leaving only a strong centralized risk function.

Organizations must augment their centralized risk management efforts with localized, distributed data, and the only way to do that reliably is to invest in automated technology solutions.
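
As one illustration of what that automation has to do, here is a hedged sketch that normalizes assessment answers arriving in different local formats onto a single shared scale; the source formats shown are invented.

```python
# Sketch: normalize locally formatted assessment answers onto a
# common 1-5 risk scale so results compare apples to apples.
# The source formats are invented for illustration.
def normalize(raw, source_format: str) -> int:
    if source_format == "high_med_low":
        return {"low": 1, "medium": 3, "high": 5}[str(raw).lower()]
    if source_format == "percent":        # a 0-100 likelihood estimate
        return min(5, max(1, round(raw / 20)))
    if source_format == "one_to_ten":
        return min(5, max(1, round(raw / 2)))
    raise ValueError(f"Unknown source format: {source_format}")

print(normalize("High", "high_med_low"),  # -> 5
      normalize(85, "percent"),           # -> 4
      normalize(7, "one_to_ten"))         # -> 4
```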

Spreadsheet gurus have carved out a significant role in managing financial and operational data in many companies. The problem with this approach is that it is a) manually intensive and b) highly reliant on the individuals who manage and operate these spreadsheets. Further, the processes for linking, updating and archiving data in spreadsheets are mostly ad hoc, leading to significant risks associated with this data.

Freddie Mac, for instance, noted in its 2005 annual report that its reliance on “end user computing systems” (read: Excel) posed a significant risk to its ability to report accurately on its financial data. More recently, other financial institutions have noted that the Fed and OCC are shining a light on this undocumented spreadsheet problem, looking for more transparency into the data in spreadsheets and file shares.

The reality is that using spreadsheets and file shares for risk and compliance data is a dead end. While companies may be able to get through one cycle of review with internal auditors, a regulator and/or rating agency, the long term implications of adopting a spreadsheet-based architecture for risk and compliance data are extremely problematic. Not only will risk managers have trouble getting visibility into the data because of poor reporting capabilities, but they will also rightly question the accuracy of the data itself. This skepticism is precisely why so many companies are moving off spreadsheets to a more programmatic approach to managing risk and compliance initiatives.

A traditional model for planning the audit process typically examines 10-20 risk factors for each element of the audit universe and buckets each auditable entity into a risk categorization that drives the frequency with which it is audited. While this approach may have worked well in the past, modern audit departments are being asked to do more with less. The known risk universe gets bigger by the day, and investing in a massive risk evaluation for each entity may not be the best use of resources. Is it worth tying up valuable stakeholders in management and on the audit committee to assess the risk inherent in the coffee procurement process for a remote sales office?
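
For contrast, here is a stripped-down sketch of the traditional scoring model just described: weighted risk factors rolled into a score that buckets the entity into an audit frequency. The factors, weights, and thresholds are hypothetical.

```python
# Sketch: traditional audit planning. Weighted 1-5 factor ratings
# roll up to a score that sets audit frequency. Weights and
# thresholds are hypothetical.
WEIGHTS = {"financial_materiality": 0.4, "rate_of_change": 0.3, "prior_findings": 0.3}

def risk_score(ratings: dict) -> float:
    return sum(WEIGHTS[factor] * rating for factor, rating in ratings.items())

def audit_frequency(score: float) -> str:
    if score >= 4.0:
        return "audit annually"
    if score >= 2.5:
        return "audit every two years"
    return "rotational coverage"

entity = {"financial_materiality": 5, "rate_of_change": 4, "prior_findings": 3}
print(audit_frequency(risk_score(entity)))  # 2.0 + 1.2 + 0.9 = 4.1 -> annually
```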

Progressive organizations are turning towards a more agile, top-down approach to risk assessment to drive audit scheduling. This leads to more efficient resource allocation, ensuring auditors are focused on the truly risky areas.

Attrition.org maintains a list of public, high-profile data breaches. The list is staggeringly long and goes back to the year 2000. TJX, while a high-profile data breach and perhaps one of the biggest stories of 2007, is only one of the many that were publicly reported. And companies have a vested interest in not making these events public. Add to that the breaches that happen every day and go undiscovered, and it becomes clear that this staggeringly long list is just the tip of the iceberg.

But why is this list growing? Preventative technology and knowledge gets better and better every day. Shouldn’t we be getting safer? Information risk management is sometimes a thankless job. As an old mentor of mine used to say, a good day is a day where nothing happens. The villains get better and better every day, however, and the gap remains. Your organization is susceptible, and it’s critical you do everything you can to keep the gap as narrow as possible.
