Science Versus Art in Risk Management: Lessons from Merrill Lynch

Merrill Lynch’s former head of market risk oversight sounds off on what he sees as the failures of current risk management practices. Is he right?

The belief that science is better than art or going by the gut is becoming increasingly widespread in U.S. society. We see it in the push for testing and teacher standards in education; in the rise of Sabermetrics and the Moneyball philosophy in baseball; and perhaps most significantly in the current thinking about risk management in banking and financial services. This is being fueled by the big data-driven explosion of information available to organizations for analysis, new and advanced modeling and quantification tools and capabilities, and an influx of physicists, statisticians and quants into the industry -- not to mention regulatory requirements for stress tests and other numbers-based reporting. The conventional wisdom is that stricter adherence to the science of risk management -- models, tools, objective calculations -- could have prevented the worst of the subprime meltdown and global financial crisis.

A fascinating article in the New York Times suggests that this is not necessarily the case. Jesse Eisinger, a reporter for Times partner ProPublica, spoke with John Breit, a former physicist who went into the private sector and eventually became head of market risk oversight for Merrill Lynch (he resigned from the position in 2005). Breit, now retired, offers some very astute and disturbing analyses of the state of risk management in financial services. Specifically, this quant contends that there's too much reliance on science and numbers in financial services risk management. At Merrill Lynch, Eisinger writes, Breit "learned that his job was really psychologist, confessor and detective. He became the financial version of a counterintelligence officer, searching for the missed clues and hidden dangers in the firm's trading strategies." The article continues:

Instead of fixating on models, risk managers need to develop what spies call "humint" -- human intelligence from flesh and blood sources. They need to build networks of people who will trust them enough to report when things seem off, before they become spectacular problems. Mr. Breit, who attributes this approach to his mentor, Daniel Napoli, the former head of risk at Merrill Lynch, took people out drinking to get them to open up. He cultivated junior accountants. "They see things first," he said. "Almost every trading debacle was sitting on some accountant's desk."

According to Breit, most bad trades have more to do with hubris and delusion than with dishonesty or deliberate intent to game the system.

Most traders who get into trouble, he thinks, aren't bad guys. The bad ones, who try to cover up improper trades, are relatively easy to detect. The real threat, he said, comes from the "crazy ones" who really believe they've found ways to spin flax into gold. They can blow up a firm with the best of intentions.

But the current practice of risk management -- driven at least in part by regulatory requirements -- is not conducive to detecting these potential disasters. According to Breit, "Regulators have reduced risk managers to box checkers, making sure they take every measure of risk and report it dutifully on extensive forms. 'It just consumes more and more staff, turning them into accountants and rotting brains.'"

Breit believes risk management procedures should be less standardized in order to reflect the strategies, approaches and decisions of particular firms. At the same time, risk managers should have more autonomy as well as more open communications with senior management, including the CEO, he says:

"The cynic in me thinks this is all in the interests of senior management and regulators to avoid blame. They may not think they can prevent the next crisis, but they then can blame the statistics." Instead, Mr. Breit says he believes that regulators should encourage firms when they reach different conclusions on what is risky and what is safe. That creates a diverse ecosystem, more resilient to any one pestilence. And the regulators should empower risk managers by finding out how many times they meet with chief executives and what they have recently vetoed, and by judging whether the traders respect the executive. "It's all completely unquantifiable and vague," he said, adding that a risk manager should be divorced from the profit and loss statement, the one "who throws sand in the gears."

Breit ultimately felt marginalized at Merrill Lynch and may have an axe to grind; and of course it's always possible to be insightful in hindsight. But I think he offers some really provocative observations here, and they don't apply only to risk management. What are the limits of data, rules and standards? When are flexibility, nuance and imagination appropriate?

It's not as simple as saying, "Let's hire English majors and philosophers, not scientists." Furthermore, given the intense and generally unfriendly regulatory and political scrutiny of banks, it's unlikely we're going to see any organizations buck the science and quantification trend. At least not before the next crisis.

The emphasis on data and models has a lot to do with accountability. If you have all these numbers and analyses in front of you, it's easier to find the person or people to blame. When the financial crisis occurred, no small group of people was held accountable, and that angered a lot of people. But assigning individual blame also ignores the systemic issues that led to the crisis. The same thing is going on in education: school systems are collecting data to zero in on bad teachers, who can then be held accountable for student failures. But that doesn't address problems that could be spread throughout the system.