Board Education

We calculate an extensive amount of bank peer data each quarter. Over the past year or so I’ve been asked on several occasions if our database includes the Texas Ratio. While we don’t regularly compute it, I did query our data and calculate some Texas Ratio peer stats. Here are a few quick facts using bank data as of 12/31/2009 (see note #1):

337 banks have a Texas Ratio of greater than 100%

The typical Texas Ratio ranges between 3.5% and 36.1%

The average is 27.7%

Nearly one-third of the banks above the 100% mark are in Georgia or Florida

To give you some perspective on the 337 number, four years ago as of 12/31/2005 there were only two (2) banks with a Texas Ratio above 100%. Quite a change.

Computing the Texas Ratio is pretty simple using the data from the call report:
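The commonly used formula divides non-performing assets (non-performing loans plus other real estate owned) by tangible equity plus loan loss reserves. Here’s a minimal sketch in Python — the figures are made up for illustration, and the field names don’t correspond to actual call report item codes:

```python
def texas_ratio(nonperforming_loans, other_real_estate_owned,
                tangible_equity, loan_loss_reserves):
    """Texas Ratio = non-performing assets / (tangible equity + reserves).

    All inputs should be in the same units (e.g., thousands of dollars).
    """
    nonperforming_assets = nonperforming_loans + other_real_estate_owned
    return nonperforming_assets / (tangible_equity + loan_loss_reserves)

# A hypothetical bank with $50M of NPLs + OREO against $40M of
# equity + reserves crosses the 100% mark:
ratio = texas_ratio(40_000, 10_000, 35_000, 5_000)
print(f"{ratio:.1%}")  # 125.0%
```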

Note #1: We used the preliminary FFIEC call report data available in mid-February of 2010. As a result our sample size was only 7,036 banks; some call report data was not yet publicly available at that time. Also, some banks may have re-filed their call report since then.

If we think of back-testing as something of a QC process it’s easy to understand this first type of back-test, the Rubber Stamp. The image of a quality inspector with his “inspected by” sticker probably comes to mind. This is the likely source of the words “independent” and “third-party” that show up in the communication about back-testing from regulators.

The black box At some level your model is a black-box cash-flow calculator. The calculator was designed by someone else and it should follow some pretty fundamental rules, like 1+1=2. Or, more specifically for example, given the maturity date, outstanding principal, current rate, etc. it should be able to calculate the principal payments for a given loan correctly. Does it calculate interest correctly? Can it determine if a loan should reprice given the next reprice date, rate cap, and rate floor? etc.
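As a sketch of the kind of single-instrument check described above — given an index rate, spread, cap, and floor, does the engine land on the right rate, and does it know when to reprice? — something like this (a simplified illustration, not any vendor’s actual engine):

```python
from datetime import date

def repriced_rate(index_rate, spread, rate_floor, rate_cap):
    """New rate for a variable-rate loan: index + spread, bounded by the
    contractual floor and cap."""
    return min(max(index_rate + spread, rate_floor), rate_cap)

def should_reprice(as_of, next_reprice_date):
    """A loan reprices once the as-of date reaches its next reprice date."""
    return as_of >= next_reprice_date

# Prime at 3.25% plus a 25bp spread would give 3.50%,
# but a 5.50% floor holds the rate up:
rate = repriced_rate(0.0325, 0.0025, 0.0550, 0.1800)
print(f"{rate:.2%}")  # 5.50%

# Reprice-date check:
print(should_reprice(date(2010, 3, 1), date(2010, 2, 1)))  # True
```

The point of a test like this isn’t the code itself — it’s that someone independent can feed the black box known inputs and confirm the outputs by hand.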

Once someone has verified that the model can calculate the correct cash-flows for a single instrument, they should verify that it can handle multiple instruments. Your loan or CD portfolio is made up of more than one account, isn’t it? Therefore there should be some test of the model’s scalability.

Vendors should have an independent firm audit and validate (or certify) the calculation engine or black-box part of their model. There aren’t any “official” industry-accepted certifications that A/L model vendors can get. However, the vendor should be able to obtain an opinion letter, similar to an audit opinion letter that a public accounting firm might offer, about the model’s ability to calculate cash-flows correctly.

Other due diligence The rubber stamp back-test also involves obtaining additional standard documents from your model vendor. This is primarily to establish credibility for your model. This is important. Are you using an established credible model (A/L BENCHMARKS, Farin, Darling, Profitstars, IPS-Sendero(Fiserv), Plansmith, Bancware, etc.) or are you using “Fred’s (from accounting) Model”? While using a model from an established vendor is not a guarantee, it certainly helps on the “gain confidence in your model” front.

Every vendor should be able to communicate some sort of standard policies and procedures including a privacy policy, security policy, and a disaster recovery plan.

Don’t stop here Immediately following the publication of OCC Bulletin 2000-16 there was a scramble by model vendors (outsourced and in-house) to obtain an independent review of their black-box. At the time the entire industry assumed that if you had the review, you satisfied the requirements of the back-test. That’s a pretty dangerous assumption. Anyone who has run an A/L model knows that it’s the assumptions made, more than anything else, that ultimately determine whether a model is “good” or “bad”. Knowing that the black-box can add 1 + 1 correctly helps, but it’s only a small part of the back-test process.

At the heart of any earnings-at-risk measurement is a base forecast or projection. In addition to the base forecast we apply some sort of stress-test (usually a rate shock up and down). I frequently come across bankers who endeavor to get the stress-test “right”. They are convinced that they can design a “realistic” projection and test to measure their exposure.

As I’ve mentioned in more than one of my presentations and also here on my blog – like it or not, this projection and stress-test oftentimes comes down to something only slightly better than crystal-ball gazing. There’s a good article in last Friday’s New York Times with an introduction that sums this up quite nicely. The article itself is an opinion about increased regulatory oversight. It is a warning that the country should not be overly optimistic about how successful improved oversight will be. The initial comments about the reliability of forecasts are quite insightful:

Perhaps the best place to start is to acknowledge what we cannot do. If recent events have taught economists and policy makers anything, it is the need for humility.

One thing we cannot do very well is forecast the economy. The recent crisis and recession caught most economists flat-footed. This is nothing new. We have never been good at foretelling the future, but when the news is favorable, others forgive our lack of prescience.

Some critics say the Federal Reserve should have foreseen the bursting of the housing bubble and its financial aftershocks. A few of them, having made the correct call themselves, are enjoying newfound celebrity.

Yet at any time, there are many forecasters with a large range of views. After the fact, a few will turn out to be right, and many wrong. Policy makers at the Fed don’t know in advance who will be the lucky few. Their best course is to rely on the consensus forecast and to be ready for the inevitable surprises.

In this series I’m going to address seven different ways you can back-test your model. Note that these are not discrete steps that must be taken in order. In fact some of these back-tests you may never use. Regardless of which back-tests you choose the overall goal of back-testing is the same.

What is back-testing?

Actions we can take that will give us confidence in our model.

A process that helps us do a better job of modeling in the future.

Back-testing is about gaining confidence. It’s asking what can we do better? It’s sort of a quality control process to make sure these models are providing us with reasonable results. The important thing to keep in mind is that back-testing is not a one time thing. It’s not “one and done”.

In conducting back-testing…[our] focus should be prospective rather than retrospective. Instead of trying to determine how smart, precise, stupid, or lucky we were [with our models] in the past, it’s important to examine prior forecasts and strategies to glean some insight and understanding that we can apply to improve future risk measurement [or modeling]…

What they are saying is that it’s a process. For each back-test we run it’s not necessarily the results themselves that matter but rather it’s what we learn about our model that counts. With that knowledge we can modify and make appropriate changes to our model.

In the 20+ years I’ve been working with A/L models I’ve seldom come across modeling reports that are “right” or “wrong”, only results that are “reasonable” or “unreasonable”.

So the purpose of a back-test is to gain confidence in the model, and to learn how to make the model better in the future.

I’ll cover each of the seven ways in a separate post - follow the links, 1 through 7, shown at the right (above). Also here’s a link back to the introduction.

Several weeks ago the Maryland chapter of FMS asked if I would be interested in presenting to their group. They wanted me to cover a hot topic in asset/liability management and modeling. While several issues like credit quality, liquidity, capital, and interest rate risk (and oh yeah, the bad economy in general) top the list, there’s a specific issue that I think has been bugging our clients for several years. The issue is model back-testing.

Ever since OCC Bulletin 2000-16 (Guidelines for Model Validation) was released the pressure has intensified to back-test the models we use to measure interest rate risk (and liquidity risk). All of these communications from the examiners mention model back-testing in one way or another:

However, given the amount of communication, there’s a surprising lack of clarity about just what a back-test is. Don’t get me wrong, I think most folks have a notion of what “running a back-test” means. But if asked to tackle the back-test problem they’re not sure where to start. (side note: The industry can’t even agree on the appropriate word(s) to use. Is it “backtesting” or “back-testing”? Popular media seems to use the one word. I’ve seen many other places use the hyphenated word. I’ll stick to “back-testing” since I also talk about stress-testing quite a bit. The hyphenated usage seems to make more sense.)

For instance if I asked you to think about some words or phrases when you hear “run a back-test” what comes to mind? Here are just a few words we hear from clients when they call and ask us for help with back-testing a model:

It probably wouldn’t shock you to learn that I’ve seen this sentence regarding back-testing. It uses eight of the words listed above:

A bank should audit the model by obtaining an independent third-party validation or review which periodically back-tests the model inputs, assumptions, and outputs.

That sounds nice and official doesn’t it? Well ok, I made it up. I didn’t actually find this sentence in anything I’ve read, but that sounds like examiner and audit speak. The problem is that it really doesn’t give you a place to start. How do you go about “running a back-test”? What is the first step?

Over the next several weeks I’m going to run a series of posts that will be a practical guide to model back-testing. In fact that was the title of my presentation to the Maryland chapter of the FMS this week. My 50 minute presentation seemed to be well received. I’d welcome your comments on each of the upcoming posts. I’ll start here – Seven ways to back-test your model.

For banks that have floors on a measurable portion of their variable rate portfolio, rising rates could present a problem that wasn’t really anticipated. Traditionally loan rate floors are supposed to limit down-side risk, but in this unusually low rate environment the rate floors are essentially making a chunk of the variable rate portfolio behave like fixed-rate…even when rates rise.

[rising rates] could actually hurt banks' margins because so many of the industry's business loans have hit the so-called floor, or the minimal rate a lender and borrower agree to when negotiating a commercial mortgage or property loan. …"Lending rates might not all move up because there are a lot of floors in place…Margins could get squeezed."

It was a good article offering plenty of anecdotal evidence about loans being “below their floor”. The CEO of BNC Bancorp remarked that “most of its commercial borrowers are at a floor rate of 5% to 6% right now. The bulk of those loans are priced at prime plus a half or prime plus one. So they'd be paying a rate of 3.75% to 4.25% if the floor wasn't there.”

I started thinking that it would be nice to know if this experience was typical. Are many other community banks experiencing the same thing? If so, on average how far “below the floor” are their variable rate loans? What is typical? The answers to these questions are critical for community banks to have a better understanding of how rising rates will impact their loan portfolio yields.

We run our A/L BENCHMARKS model for about 200 community banks across the country. As part of the process we collect detailed loan data to model portfolio sensitivity. With this detail we can tell if an individual loan is fixed rate or variable rate. If it’s variable rate we know its current rate, pricing index, spread, and its floor (if any). From this information we can tell if a loan is at its floor and how far “below the floor” the index plus spread is.

For 4th quarter 2009 we’ve collected data for over 650,000 different loans. Total dollars of outstanding balance is $39.5 billion. About half of the outstanding balance, around $19 billion, is variable rate. Of these variable rate loans a full 20% of them are currently “at their floor”. For each of these loans we computed what the loan rate would be if the floor wasn’t in place. The difference between this computed rate and the loan floor represents how far “below the floor” the loan is. Or, to look at it differently, how much will the loan index (in this case Prime Rate) have to move in order for the variable rate to price up?

The data shows that loans with floors are on average -185 basis points “below the rate floor”. This means that Prime rate will have to move up more than 185 basis points before these loans will start repricing again. The typical range is -250 basis points below to -75 basis points below (typical range is the middle sixty percent of all loans with floors). The largest amount below the floor is -775 basis points and the smallest is 0 basis points – meaning the loan index plus spread is currently equal to the floor.
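The “below the floor” arithmetic is straightforward — take index plus spread, subtract the floor, and express the difference in basis points. A rough sketch (the portfolio tuples below are made up for illustration, not figures from our files):

```python
def below_floor_bp(index_rate, spread, floor):
    """Basis points between index + spread and the floor.
    Negative means the loan is stuck at its floor."""
    return round((index_rate + spread - floor) * 10_000)

# Prime 3.25% plus a 50bp spread against a 5.50% floor:
print(below_floor_bp(0.0325, 0.0050, 0.0550))  # -175

# Averaging across a (hypothetical) set of loans currently at their floor:
loans = [(0.0325, 0.0050, 0.0550),   # (index, spread, floor)
         (0.0325, 0.0100, 0.0625),
         (0.0325, 0.0025, 0.0500)]
at_floor = [below_floor_bp(i, s, f) for i, s, f in loans if i + s < f]
print(sum(at_floor) / len(at_floor))  # -175.0
```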

The overall impact of this will vary from bank to bank depending on the portfolio mix – variable versus fixed rate. It also depends on how many loans actually have floors. Again this is bank specific.

Now that we know that Prime rate will have to move up on average 185bp before income on these loans starts moving, it puts a new spin on our understanding of the +100, +200, and +300bp stress-tests…doesn’t it?

Here are the highlights that I think are important to our A/L BENCHMARKS customers and to community banks in general, aka “Main Street” banks. You’ll have to read elsewhere for an opinion on what this means for “Wall Street” banks (the big guys).

1) Use an IRR measurement tool that captures the types of risks you have on your balance sheet. There are essentially four types of interest rate risk: maturity-repricing risk, yield-curve risk, basis risk, and option risk. Through the variety of data files, inputs, and assumptions we use in A/L BENCHMARKS we are capable of modeling all these risks, regardless of the portfolio in which they are found.

2) The Board has the ultimate responsibility for the risk undertaken. This is why, in addition to providing the in-depth analysis and reports, we provide the Board Report showing the magnitude and direction of risk of the bank quarter-to-quarter and compared to banks of similar size.

3) Measure IRR from a short-term and long-term perspective. We measure both earnings-at-risk (short-term), and equity-at-risk (long-term). Further reading:

4) Be sure you understand the underlying assumptions and analytics used by the model. I hope you don’t pay for our service, provide us with little input, and then plop the reports in the bottom drawer of your desk. The Advisory makes explicit warnings about the use of third-party models for exactly this reason. First, there is a considerable amount of insight into the methodologies documented right in the reports. Additionally, the managerial assumptions that you provide via the Service Kit can be found in the back of the Executive Report and should be reviewed quarterly. Finally, you can also contact us to review your assumptions and/or even schedule regular report reviews with your ALCO or Board.

5) Earnings at risk should be measured using a 1-year, 2-year, 5-year, and/or a 7-year time frame. By default we run the earnings simulation using a 1-year time frame. I disagree with the advisory’s assertion that “…IRR exposures are best projected over at least a two-year period. Using a two-year time frame will better capture the true [risks]”. For most banks an earnings forecast of 1-year is at best just thoughtful guesswork. A 2-year (or longer) earnings projection often enters the realm of fantasy. Ridiculous as these longer projections sound – we can run these in A/L BENCHMARKS if needed. (Note: Apparently they also disagree since just a few paragraphs later the Advisory admits that earnings simulations have “limitations” in quantifying IRR exposure, see point #7 below).

6) Earnings simulations can either be “Static” or “Dynamic”. The base forecast for the earnings simulation can either be a flat-balance sheet forecast (static), or it can incorporate growth and new business (dynamic). We can model your earnings simulation whichever way you prefer. Dynamic growth can be entered via the Service Kit under “Balance Sheet Projection.”

7) Economic value-based models should be used to broaden the assessment of IRR exposure. Essentially they say that earnings simulations, because they capture only a specific time-frame (1-year, 2-year, etc.), may miss certain risks that exist on the balance sheet. To address this you should look at economic value of equity (EVE) at risk which focuses on longer-term time horizons and captures all expected future cash-flows. Further reading:

8) Just running a +/-200 basis point stress-test is not sufficient. Many risks, option-risks in particular, don’t show up until you’ve significantly changed the market rate environment. It can also be enlightening to see what happens between a zero and 200bp shift (i.e. +/-100bp). A/L BENCHMARKS regularly reports 100, 200, and 300 basis point shocks. The Advisory suggests (given today’s rate environment) that +400bp may even be prudent and we can easily run this for you. They also suggest rate ramps, and curve twists as other possible alternatives. For most banks these additional scenarios only provide additional interesting and anecdotal information. I’ve seldom seen a risk exposure appear on a rate ramp analysis that we didn’t already see or know about in the shock analysis. Nevertheless, if you need to see a rate ramp stress-test we can run that for you.

9) The regulators recognize that a 100, 200, and 300 basis point shock is probably sufficient for most banks. The actual text is this, “The regulators recognize that not all financial institutions will require the full range of the scenarios discussed above. Non-complex institutions (e.g., institutions with limited embedded options or structured products on their balance sheet) may be able to justify running fewer or less intricate scenarios, depending on their IRR profile. However, interest rate shocks of sufficient magnitude should be run, regardless of the institution’s size or complexity”.

Without sounding too snarky, I’ll believe it when I see it. Most field examiners I’ve worked with believe that more analysis is better. If you can run seven shock stress-test scenarios (base, +/-100, +/-200, and +/-300) why not run seven more with a rate-ramp, a twist, etc.? It’s bound to turn up something interesting, right?…Well, no. There’s the law of diminishing returns at work here. We can spend a lot of time and money to run these additional scenarios, but we’re unlikely to uncover some hidden exposure when we run that extra fourteenth or fifteenth interest rate risk stress-test scenario.

Let’s also not forget the time involved to run such stress tests ultimately takes the banker away from the core business of running the bank (or running other stress-tests to measure other risks like liquidity and credit quality).

10) You should pay attention to key assumptions like prepayment speeds and core-deposit sensitivity. Absolutely. These are the “big two” assumptions that drive the bank’s overall sensitivity. This is true for core-deposit behavior especially. Updated information for both of these inputs should be provided to us each quarter so that we can more accurately reflect your IRR profile. Further reading:

11) Back-test the model and learn how to use the model better. Your forecasts are always going to be off…there’s no getting around that. You can, however, learn from your “mistakes”. For example, if you keep including a projection of 5% growth in DDAs, but in reality end up funding the bank by growing your brokered CD portfolio, you might want to adjust your projections to reflect the actual behavior. A back-test of a forecast isn’t really a “right” or “wrong” test (I can easily tell you that most likely your forecasts will be “wrong”, as your crystal ball just isn’t that good). A back-test highlights things that we could be forecasting better (so we’ll be “less” wrong in the future).
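In that spirit, a back-test of growth assumptions can be as simple as comparing projected to actual growth and flagging the big misses for next quarter’s assumptions. A hypothetical sketch (the categories, numbers, and 2-point tolerance are all arbitrary):

```python
def backtest_growth(projected_pct, actual_pct, tolerance_pct=2.0):
    """Return the categories whose projected growth missed actuals by more
    than the tolerance -- not to grade the forecast, but to spot which
    assumptions to adjust going forward."""
    flags = {}
    for category, projected in projected_pct.items():
        miss = actual_pct[category] - projected
        if abs(miss) > tolerance_pct:
            flags[category] = miss
    return flags

# The DDA-vs-brokered-CD example from above (illustrative numbers):
projected = {"DDA": 5.0, "Brokered CDs": 0.0}
actual = {"DDA": 0.5, "Brokered CDs": 12.0}
print(backtest_growth(projected, actual))
# {'DDA': -4.5, 'Brokered CDs': 12.0}
```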

12) Have someone independently review your modeling process. This is an essential part of any good modeling process. Although like the back-test, this is often misinterpreted as some sort of checklist item, “Has the model been independently reviewed…it has?…check”. That’s the wrong way to think about this. An independent review should be an ongoing process with pieces and parts of it being done every quarter. Further reading here:

That’s about it. It is interesting to see the list of references at the end of the document. Almost all of them reference material that was produced prior to 2001. Again this just shows that there’s really nothing new here, they are indeed just reiterating what’s already been communicated.

The FDIC just published the most recent edition of their Supervisory Insights (Winter 2009). There are some interesting topics covered this quarter and, given my firm’s focus on measuring interest rate risk for community banks, one article in particular caught my eye:

The best quote from the article comes in just the third paragraph, “Recent FDIC Call Report data suggest financial institutions are becoming…more exposed to increases in interest rates.” I certainly agree with this observation. Data cited in the article supports the case: higher concentrations of longer-term assets and more use of less-stable funding sources. However, while they mention that simulation is the best way to measure interest rate risk exposure, they don’t run a simulation themselves and therefore can’t present any simulation results - fortunately Olson Research can.

Our summary short-term interest rate risk measurement shows that indeed more banks are exposed to rising rates. While the level of net interest earnings at risk exposure remains about the same, between 6.5% and 8.5%, the number of banks exposed to rising rates has increased (see graph at the right). When the target Fed Funds rate reached its lowest point in the 4th quarter of 2008 the number of banks exposed to rising rates was 64%. After three more quarters of historically low rates the number of banks exposed to rising rates is 71%.

The longer we stay in this low rate environment the more “strange” behaviors show up when we run earnings simulations for our bank clients. Since the sharp decline in interest rates began back in September 2007, I can’t count the number of conversations I’ve had with community bank CFOs about modeling loan rate floors. Credit problems aside, banks were better prepared to handle this down-rate cycle compared to the huge drop in rates after the attacks on the twin-towers in New York. (Again, I said credit problems aside, I’m only talking about interest rate risk and loan pricing).

Following 9/11, a typical commercial loan didn’t have a loan rate floor written into the contract. As a result loan yields fell right along with market rates creating quite a margin squeeze. This time around banks that are weathering the credit crisis storm are a little better prepared. Many have established rate floors and are enjoying the benefits of a much wider spread to funding costs than they otherwise would have had.

But there’s down-side to this position lurking just around the corner. It’s a problem that you typically don’t consider when establishing a loan rate floor. The best way to understand the problem is to show you a real-world example. Here’s a loan straight out of one of our client’s portfolios: Owner-occupied commercial-real-estate $1M, variable-rate Prime+25 basis points, reprices monthly, with a floor of 5.50%. Let’s look at the rate behavior in four different stress-test environments: 2nd quarter 2007, 1st quarter 2008, 3rd quarter 2008, and finally 2nd quarter 2009. I’ll call these the base behavior, classic behavior, expected behavior, and finally the unexpected behavior.

1) Base behavior – 2nd quarter 2007 The base case is pretty trivial. It is the 2nd quarter of 2007 and Prime Rate is at 8.50%, so the loan is priced at 8.75%. If market rates rise or fall the loan’s rate moves in lock-step up or down. A rate swing up +200bp moves the loan’s rate to 10.75%, and a rate swing down –200bp takes the loan’s rate down to 6.75%. The green line makes it easier for us to compare the base rate to the various rate change environments. It simply traces the base case rate across all scenarios for easy reference. The red line shows the loan’s rate floor of 5.50%. As of 2007Q2 none of the stress-tests, even the –300bp down, causes the loan’s rate to reach the floor.

2) Classic behavior – 1st quarter 2008 Starting in September 2007 market interest rates began to steadily decline. By now, the 1st quarter of 2008, Prime rate is down to 6.00%. Our example loan’s rate is now 6.25%. The stress-test is showing what I call the “classic” loan rate floor behavior. If market rates rise the loan’s rate moves in lock-step up. However, if rates fall our loan’s rate only falls a small amount. In fact if market rates fall by more than –75 basis points our example loan’s rate will reach its floor of 5.50%. This is “classic” because it’s just what we intended to happen if market interest rates fall, at some point we (the bank) will be protected from falling rates. The model shows this in classic fashion.

3) Expected behavior – 3rd quarter 2008 Moving ahead to the 3rd quarter of 2008 we see Prime rate dive to 4.50%. Without a rate floor our example loan would now be 4.75%, but the floor stops the slide at 5.50%. From a modeling standpoint we see no change in loan rate from the base case to any of the down-rate scenarios. We’re locked-into the floor of 5.50%. We’ve eliminated our down-side risk and still have all the up-side potential when rates begin to rise again.

4) Unexpected behavior – 2nd quarter 2009 By 2nd quarter of 2009 we’ve reached the bottom (we hope) and Prime rate is 3.25%. In the base-case we see no change in our loan’s rate of 5.50%. What we do see is a peculiar behavior in the rates-up shocks. Because of the rate floor, this loan now looks more like a fixed rate loan than a variable rate loan. In fact the only stress-test shock that will change the loan’s rate is the up +300bp shock. Now if market rates rise by +300bp my loan’s rate will only change by +100bp. Contrast that with the loan’s rate behavior modeled back in the “base behavior (#1)”. Back then a +300bp shift in market rates meant a +300bp change in loan rate, now we would only see a third of that change.
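All four snapshots come from the same one-line pricing rule; only the starting level of Prime changes. A small sketch of that rule, assuming a parallel shock applied to the index (the loan terms are the example loan’s; the contractual cap is ignored for simplicity):

```python
def shocked_rate(index_rate, spread, floor, shock_bp):
    """Loan rate after a parallel rate shock, honoring the floor."""
    return max(index_rate + shock_bp / 10_000 + spread, floor)

# 2009Q2: Prime 3.25%, priced at Prime + 25bp, with a 5.50% floor.
# Only the +300bp shock moves the rate -- and only by 100bp.
for shock in (-300, -200, -100, 0, 100, 200, 300):
    print(f"{shock:+5d}bp -> {shocked_rate(0.0325, 0.0025, 0.0550, shock):.2%}")
```

Running this shows 5.50% for every scenario except +300bp, which lands at 6.50% — exactly the fixed-rate-like behavior described above.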

Situation #4 is still where we are today. For banks that have floors on a measurable portion of their variable rate portfolio, rising rates could present a problem that wasn’t really anticipated. Traditionally loan rate floors are supposed to limit down-side risk, but in this unusually low rate environment the rate floors are essentially making a chunk of the variable rate portfolio behave like fixed-rate…even when rates rise.

Still it’s better than having no floor at all. Those banks that have integrated floors into their loan pricing have preserved higher loan yields (and therefore earnings and capital) at a time of heightened liquidity and credit quality risk.