Model Risk Management for CECL, 29 Mar 2018

Model risk management became a hot topic in the banking industry after the spectacular failures of so many loss reserve models in the US Mortgage Crisis. It's not enough to build a model: you need to show that it is sound and effective, and you need to decide what the criteria for sound and effective are before you begin building. With the dramatic changes planned for computing loan loss reserves under CECL, model risk management should again be top of mind.

One of the primary regulatory documents on model risk management is SR 11-7. It lays out the fundamentals of model risk management and model governance. The following excerpts are relevant, because even before smaller organizations implement CECL, they must develop a model risk management plan. This study was part of such a plan, as is clear from the requirements below:

Evaluation of Conceptual Soundness. This element involves assessing the quality of the model design and construction, as well as review of documentation and empirical evidence supporting the methods used and variables selected for the model. This step in validation should ensure that judgment exercised in model design and construction is well informed, carefully considered, and consistent with published research and with sound industry practice.

Ongoing Monitoring. This step in validation is done to confirm that the model is appropriately implemented and is being used and performing as intended. It is essential to evaluate whether changes in products, exposures, activities, clients, or market conditions necessitate adjustment, redevelopment, or replacement of the model and to verify that any extension of the model beyond its original scope is valid. Benchmarking can be used in this step to compare a given model's inputs and outputs to estimates from alternatives.

Outcomes Analysis. This step involves comparing model outputs to corresponding actual outcomes. Back-testing is one form of outcomes analysis that involves the comparison of actual outcomes with model forecasts during a sample time period not used in model development at a frequency that matches the model's forecast horizon or performance window.
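The back-testing described above can be sketched in a few lines. This is a minimal illustration, not a regulatory recipe: the loss-rate framing, the MAPE metric, and the acceptance threshold are assumptions chosen for the example, and real programs would use metrics and thresholds set in the organization's own model risk policy.

```python
# Minimal back-testing sketch (illustrative assumptions throughout).
import numpy as np

def backtest(forecast_rates, actual_rates, threshold=0.25):
    """Compare forecast loss rates to actual outcomes over a holdout window.

    forecast_rates, actual_rates: per-period loss rates for a sample
    period NOT used in model development.
    threshold: maximum acceptable mean absolute percentage error (MAPE),
    fixed in advance, not after seeing the results.
    """
    forecast = np.asarray(forecast_rates, dtype=float)
    actual = np.asarray(actual_rates, dtype=float)
    mape = np.mean(np.abs(forecast - actual) / actual)
    bias = np.mean(forecast - actual)  # persistent over/under-prediction
    return {"mape": mape, "bias": bias,
            "within_threshold": bool(mape <= threshold)}

# Hypothetical quarterly loss rates for a holdout period
result = backtest([0.020, 0.025, 0.030], [0.022, 0.024, 0.035])
```

The key discipline is that the holdout window, the metric, and the threshold are all decided before the comparison is run.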

The results of the three core elements of the validation process may reveal significant errors or inaccuracies in model development or outcomes that consistently fall outside the banking organization's predetermined thresholds of acceptability. In such cases, model adjustment, recalibration, or redevelopment is warranted. At times, banking organizations may have a limited ability to use key model validation tools for various reasons, such as lack of data or of price observability. In those cases, even more attention should be paid to the model's limitations when considering the appropriateness of model usage, and senior management should be fully informed of those limitations when using the models for decision-making.

Numerous books, articles, and presentations are available on model risk management, such as by Scandizzo 2016 and Bennett 2017. Our article (Breeden and Liang, 2015) addressed the process of developing, validating, and approving models, challenger models, and benchmark models. All of that will still apply for CECL, although in scaled-down form for smaller lenders.

...where models and model output have a material impact on business decisions, including decisions related to risk management and capital and liquidity planning, and where model failure would have a particularly harmful impact on a bank's financial condition, a bank's model risk management framework should be more extensive and rigorous.

The above quote from SR 11-7 leaves no doubt that model risk management will be important for CECL. Of course, the guidelines also allow for scaling the intensity of the model risk management process to the importance of the models and the size of the lender. Clearly, CECL models will be of high importance overall, but not equally so for smaller portfolios. Likewise, smaller lenders will face lower overall requirements, but some level of model risk management will always be required.

Another valuable passage from SR 11-7 is:

An integral part of model development is testing, in which the various components of a model and its overall functioning are evaluated to show the model is performing as intended; to demonstrate that it is accurate, robust, and stable; and to evaluate its limitations and assumptions.

To confirm that "the model is performing as intended", one must have an intent. As larger lenders have iterated through implementing model risk management, they have bootstrapped their way to guidelines and expectations on what constitutes an effective model. Now, the acceptance criteria already exist for any new model that will be developed. The lesson for those creating model risk management practices for CECL is simple. Before you build a model, decide how you will know if it is good.

With this as background, we often hear lenders about to implement CECL say, "Every conference that I go to says that I must implement all of the CECL models and choose which is best for my portfolio." Unfortunately, these are well-meaning conference platitudes that are unhelpful in many ways.

First, there is no such thing as "all the CECL models". Yes, the CECL guidelines list examples of models, but SR 16-12 by the Federal Reserve, OCC, and NCUA clearly states, "Similar to the existing incurred losses model, the new accounting standard does not prescribe the use of specific estimation methods." The list of models given in the FASB document is stated in many places as an "example". Nevertheless, assuming that one wants to implement multiple models, the encouragement to "choose the best for my portfolio" leaves one wondering, "How will you choose?" The two most common answers are:

1. "The one I like best"

2. "The one that forecasts best"

The first answer clearly violates model risk management practice. Even worse is when lenders say, "I want multiple models so that I can choose which answer I like best each time." Epic fail. This is not how model risk management works.

For a model to be accepted, you must have clear criteria for selection. They can be a mix of quantitative and qualitative factors, but "We like the number" should not be on the list. As our CECL mortgage study proves (Breeden 2018), the answer can change dramatically by model type, but the reasons for choosing a model should be based on sound model risk management practices. Furthermore, once you have chosen a model, do not expect to be able to change models without sound reasons, i.e., a study was conducted that proved the alternate model was better for the following reasons.... Therefore, major lenders usually maintain a primary model; a challenger model, in which they try to resolve weaknesses of the primary model for the next release; and a benchmark model offering a sanity check on the results. Managing these three models is already a significant effort. Having more makes little sense.
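One simple way to make selection criteria explicit is a weighted scorecard fixed before any candidate is scored. The criteria names, weights, and scores below are hypothetical illustrations, not a recommended set; the point is only that the criteria and weights are committed to in advance.

```python
# Hypothetical scorecard for choosing among candidate CECL models.
# Criteria, weights, and scores are illustrative assumptions; the
# discipline is that they are fixed BEFORE any model is scored.
CRITERIA = {  # weights sum to 1.0
    "out_of_sample_accuracy":    0.40,
    "stability_across_vintages": 0.25,
    "interpretability":          0.20,
    "data_requirements_met":     0.15,
}

def score(model_scores):
    """model_scores: dict of criterion -> rating on a 1-5 scale."""
    assert set(model_scores) == set(CRITERIA), "score every criterion"
    return sum(CRITERIA[c] * model_scores[c] for c in CRITERIA)

# Hypothetical candidate ratings
candidates = {
    "vintage_model":    {"out_of_sample_accuracy": 4,
                         "stability_across_vintages": 4,
                         "interpretability": 5,
                         "data_requirements_met": 5},
    "loan_level_model": {"out_of_sample_accuracy": 5,
                         "stability_across_vintages": 3,
                         "interpretability": 3,
                         "data_requirements_met": 2},
}
best = max(candidates, key=lambda m: score(candidates[m]))
```

A scorecard like this also documents, for later audit, why the primary model was chosen over its challenger.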

The second criterion, "the one that forecasts best," sounds like a good idea, but there are several problems. CECL is a lifetime loss reserve calculation. The CECL number incorporates decisions, such as the distinction between foreseeable and unforeseeable conditions, that have nothing to do with forecast accuracy. Also, because it is a lifetime loss forecast, we cannot expect to wait for the full life of a loan, or to have enough historic data, to determine whether the forecast was accurate. Many lenders are only now creating history for CECL modeling. Even with as much as five years of history, they would not capture a full economic cycle and could not fully test a five-year auto loan, a credit card, or longer-term products. Conversely, a 12-month forecast accuracy test usually captures only the degree of discontinuity between the economic model and the most recent data. Clearly, for many lenders, even having enough data in-sample to estimate a model may be a stretch. How then should they "choose"?
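The gap between a 12-month test and a lifetime forecast can be made concrete by measuring error as a function of forecast horizon. The sketch below uses synthetic vintages whose forecast error compounds with horizon, an assumed pattern for illustration only; in practice the arrays would come from holdout vintages.

```python
# Sketch: forecast error by horizon, on synthetic data (assumption:
# real inputs would be holdout vintages, not simulated ones).
import numpy as np

def error_by_horizon(actuals, forecasts):
    """actuals, forecasts: 2D arrays [vintage, months_ahead] of
    cumulative loss rates. Returns mean absolute error per horizon."""
    return np.mean(np.abs(np.asarray(forecasts) - np.asarray(actuals)),
                   axis=0)

rng = np.random.default_rng(0)
horizons = 60  # a five-year product
# Cumulative loss curves for 20 synthetic vintages
actual = np.cumsum(rng.uniform(0.001, 0.003, size=(20, horizons)), axis=1)
# Per-vintage error that compounds with horizon
drift = rng.normal(0, 0.02, size=(20, 1))
forecast = actual * (1 + drift * np.arange(1, horizons + 1) / 12)

mae = error_by_horizon(actual, forecast)
# mae grows with horizon: a 12-month test (mae[11]) says little
# about lifetime accuracy (mae[59]).
```

A model that looks acceptable at month 12 can still be far off at month 60, which is why horizon-dependent error matters for a lifetime reserve.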

Actually, this gets to the fundamental purpose of the DFA Mortgage Study and others like it that are under way. Showing how accuracy scales with the volume of charge-offs in the training data, and how error grows with forecast horizon, provides solid, quantified information that allows lenders to choose a model type when their internal data do not support a sound decision.

My advice to lenders, from a model risk management perspective, is to establish the criteria by which the best model will be chosen, use such industry studies to filter out the approaches that will be hopeless on the available data, and then dedicate the necessary resources to build a couple of appropriate model types -- in that order.