Bill Nelson

Executive Vice President and Chief Economist

William Nelson is an Executive Vice President and Chief Economist at the Bank Policy Institute. Previously he served as Executive Managing Director, Chief Economist, and Head of Research at the Clearing House Association and Chief Economist of the Clearing House Payments Company. Mr. Nelson contributed to and oversaw research and analysis to support the advocacy of the Association on behalf of TCH’s owner banks.

Prior to joining The Clearing House in 2016, Mr. Nelson was a deputy director of the Division of Monetary Affairs at the Federal Reserve Board where his responsibilities included monetary policy analysis, discount window policy analysis, and financial institution supervision. Mr. Nelson attended Federal Open Market Committee meetings and regularly briefed the Board and FOMC. He was a member of the Large Institution Supervision Coordinating Committee (LISCC) and the steering committee of the Comprehensive Liquidity Analysis and Review (CLAR). He has chaired and participated in several BIS working groups on the design of liquidity regulations and most recently chaired the CGFS-Markets Committee working group on regulatory change and monetary policy. Mr. Nelson joined the Board in 1993 as an economist in the Banking section of Monetary Affairs. In 2004, he was the founding chief of the new Monetary and Financial Stability section of Monetary Affairs. In 2007 and 2008, he visited the Bank for International Settlements, in Basel, Switzerland, where his responsibilities included analyzing central banks’ responses to the financial crisis and researching the use of forward guidance by central banks. He returned to the Board in the fall of 2008 where he helped design and manage several of the Federal Reserve’s emergency liquidity facilities.

Mr. Nelson earned a Ph.D., an M.S., and an M.A. in economics from Yale University and a B.A. from the University of Virginia. He has published research on a wide range of topics including monetary policy rules; monetary policy communications; and the intersection of monetary policy, lender of last resort policy, financial stability, and bank supervision and regulation.

The annual Federal Reserve CCAR stress test begins with the design of stress scenarios. The severely adverse scenario is the most important because it is almost always the one that binds. Under the Federal Reserve’s published standard, the severely adverse scenario should be constructed to match “…severe post-war U.S. recessions…” and to “generate scenarios that…do not induce greater procyclicality in the financial system and macroeconomy.”[1] (Tests are “procyclical” if they amplify the business cycle, for instance by imposing tougher tests when times are bad, and “countercyclical” if they impose less severe tests when times are bad.) To mimic severe recessions observed in the post-war period, the Fed simulates stressful conditions by assuming that the unemployment rate rises by at least 3 to 5 percentage points; to limit procyclicality, the increase is always set to bring the unemployment rate up to at least 10 percent. Thus, the procedure produces larger increases in the unemployment rate when the unemployment rate is low. The peak unemployment rate is reached over the course of 6 to 8 quarters.

In this blog post we examine whether the Fed’s scenario design methodology accomplishes the stated objectives. We conclude that, when evaluated using the assessment of the outlook that the Fed staff provides the Federal Open Market Committee (FOMC), the supervisory stress scenarios are countercyclical but extraordinarily implausible.

Background

Prior to each FOMC meeting, Federal Reserve staff forecast key macroeconomic variables and construct confidence intervals around those forecasts; the results are summarized in the “Greenbook” or, starting in 2010, the “Tealbook.” The forecasts are released to the public with a five-year lag. We collected the forecasts of the unemployment rate six quarters forward, along with the 70 percent confidence intervals around them, from March 2004 (when the confidence intervals were first introduced) to December 2011. The confidence intervals are calculated “…using the real-time track record of the staff forecast and using stochastic simulations of our large-scale econometric model.”[2] Exhibit 1 is an example of the staff forecast and the confidence intervals around those estimates.[3] We focus on the forecast six quarters forward because that is the longest horizon for which the staff projection is consistently available.[4]

Analysis

Our objective is to apply the Fed’s stress test methodology over this historical period to construct the peak projected unemployment rates that would likely have been used had stress tests been conducted during that period, and then to determine the cyclicality of those peak unemployment rates and the likelihood that the Fed staff preparing projections for the Greenbook would have assigned to them. Following the published stress test guidelines, we assume that the Federal Reserve’s assumption about the unemployment rate in the stress test scenario is 4 percentage points above the contemporaneous rate or 10 percent—whichever is higher.[5]
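The construction just described amounts to a one-line rule. As an illustration (a sketch of the rule as we apply it in this note, not the Fed's actual code):

```python
def stress_peak_unemployment(current_rate):
    """Peak unemployment rate in the severely adverse scenario, per the
    rule applied in this note: 4 percentage points above the contemporaneous
    rate, subject to a 10 percent floor."""
    return max(current_rate + 4.0, 10.0)

# The floor makes the assumed increase larger when unemployment is low:
print(stress_peak_unemployment(4.5))  # 10.0, a 5.5 pp increase
print(stress_peak_unemployment(7.0))  # 11.0, a 4.0 pp increase
```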

We take the upper half of the 70 percent confidence interval around the Greenbook forecast of the unemployment rate six quarters forward to be one standard deviation of that estimate.[6] We then calculate, at each Greenbook date, by how many standard deviations our constructed stress test assumption exceeds the Greenbook projection. The result is plotted as the red line in Exhibit 2. We also plot, as the blue line, the difference between the contemporaneous unemployment rate and the Fed staff’s estimate of the natural rate of unemployment (the “unemployment gap”) at the date of the Greenbook. We use the unemployment gap as a proxy for the state of the business cycle: times are bad when the unemployment gap is high and good when it is low.
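In code, the calculation behind the red line looks like the following. The numbers in the example are illustrative, not taken from any particular Greenbook:

```python
def sds_above_forecast(stress_peak, forecast, ci_upper):
    """How many standard deviations the stress assumption sits above the
    Greenbook forecast, treating the upper half-width of the 70 percent
    confidence interval as one standard deviation (the convention used
    in this note)."""
    sd = ci_upper - forecast
    return (stress_peak - forecast) / sd

# Illustrative values: a forecast of 5.0 percent with an upper 70 percent
# band of 5.8 percent, and a stress assumption of max(5.0 + 4, 10) = 10.
print(sds_above_forecast(10.0, 5.0, 5.8))  # 6.25
```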

Results

We reach two conclusions. First, as can be seen in Exhibit 2, the methodology used by the Fed to construct the unemployment rate in the severely adverse scenario of the annual stress test is countercyclical, as intended. In particular, the unemployment rate under the severely adverse scenario we construct for each Greenbook date is 10-20 standard deviations above the Greenbook forecast when the unemployment gap is low, versus about 5-10 standard deviations above the forecast when the gap is high. In other words, based on the measure of forecast uncertainty the Fed staff provided the FOMC, the Fed’s method of constructing the peak unemployment rate under the severely adverse scenario produces a projection that is less likely, and therefore more severe, when times are good than when times are bad.

Second, the projections of the unemployment rate created using the stress test formula for the severely adverse scenario are extraordinarily unlikely. The most likely constructed stress test assumption (the minimum point of the red line) is 4.25 standard deviations above the Greenbook projection, with corresponding odds of 1 in 100,000. That is, if the authors of the Greenbook had been asked at the time of writing to put odds on the unemployment rate being equal to or above the rate implied by the stress test formula, they would have said “1 in 100,000.” And that is the most likely assumption over our seven-year sample period! Over the sample period, the average number of standard deviations for a stress test assumption is 8, while the maximum is 20, levels for which our computer says the odds are “1 in infinity.” Indeed, when we construct the stress unemployment rate assumption using the Fed’s procedure for the entire post-war period, we find that actual unemployment rates over the stress test horizon never reach the stress test assumptions, coming close only once (during the recent financial crisis). Exhibit 3 reproduces Exhibit 1 (the Fed staff unemployment forecast) but with a simulated stress test peak unemployment rate added as the black dot, which, as can be seen, is literally off the chart.
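The translation from "standard deviations above the forecast" to odds treats the forecast errors as normally distributed. A minimal sketch of that conversion, using only the standard library:

```python
import math

def one_in_n(z):
    """Odds denominator N such that P(Z >= z) = 1/N for a standard normal
    variable, computed from the complementary error function."""
    p = 0.5 * math.erfc(z / math.sqrt(2.0))
    return 1.0 / p

# 4.25 standard deviations corresponds to odds on the order of 1 in 100,000:
print(round(one_in_n(4.25)))
```

At 8 or 20 standard deviations the tail probability is so small that, for practical purposes, the odds round to the "1 in infinity" described above.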

These results strongly suggest that the procedure used by Fed supervisory staff to construct the stress test scenarios results in assumptions that are too draconian. Certainly, one can debate how likely a severely adverse stress scenario should be. By establishing its standard as consistent with “severe post-war U.S. recession[s]” (of which there have been five since 1945), the Federal Reserve appears to be suggesting that the severely adverse scenario should be one likely to occur every 10 years or so.[7] Such a standard could be defended as reasonable, given that the economic costs of higher capital are imposed every day, so the benefits in terms of resiliency in crisis should be borne with some reasonable frequency. In numerous blog posts and research notes over the past year, TCH has called attention to the extreme severity of the scenarios used in the stress tests (see, for example, “TCH Research Note: 2016 Federal Reserve’s Stress Testing Scenarios”), and to the attendant risk that the stress tests will lead banks to substitute away from cyclically sensitive types of lending such as lending to small businesses and households with less than pristine credit scores (see, for example, “The Capital Allocation Inherent in the Federal Reserve’s Capital Stress Test” and “Are the Supervisory Bank Stress Tests Constraining the Supply of Credit to Small Businesses?”).

As this note demonstrates, however, the current CCAR severely adverse scenarios are wildly inconsistent with the Federal Reserve’s self-imposed standard. What would a more realistic stress scenario look like? As a potential alternative construct that moves at least in the direction of reasonableness, we reran our analysis using a 3 percentage point increase in the unemployment rate with an 8 percent floor to construct the stress test assumption. We use an 8 percent floor so that the assumed change rises above 3 percentage points only when the unemployment rate falls below 5 percent, roughly equal to the sustainable level of unemployment identified by the median FOMC participant (4.5 percent).[8] With those parameters, the most likely scenario had odds of occurring of 1 in 1,500. While the scenarios generally still had likelihoods of effectively zero, the most likely scenario in this case at least had a better than 50 percent chance of occurring over the next thousand years.
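The alternative rule can be sketched the same way as the original. Again, this is an illustration of the construction described above, not an official methodology:

```python
def alternative_stress_peak(current_rate):
    """Alternative construction considered in this note: a 3 percentage
    point rise in unemployment, subject to an 8 percent floor."""
    return max(current_rate + 3.0, 8.0)

# The assumed increase exceeds 3 pp only when unemployment is below 5 percent:
print(alternative_stress_peak(4.0))  # 8.0, a 4 pp increase
print(alternative_stress_peak(6.0))  # 9.0, a 3 pp increase
```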

Disclaimer: The views expressed in this post are those of the author(s) and do not necessarily reflect the position of The Clearing House or its membership.

[6] A “standard deviation” is a measure of the precision of a statistical estimate. Often, an interval one standard deviation above and below the mean of the estimate covers approximately 70 percent of the observations.