Saturday, April 30, 2011

Even as we agree to disagree about the usefulness or lack of the same of CAPM betas, let us reach consensus on a fundamental fact. To ignore risk in investments is foolhardy and not all investments are equally risky. Thus, no matter what investment strategy you adopt, you have to develop your own devices for measuring and controlling for risk. In making your choice, consider the following:

a. Explicit versus implicit: I know plenty of analysts who steer away from discounted cash flow valuation and use relative valuation (multiples and comparable firms) because they are uncomfortable with measuring risk explicitly. However, what they fail to recognize is that they are implicitly making a risk adjustment. How? When you compare PE ratios across banks and suggest that the bank with lowest PE ratio is cheapest, you are implicitly assuming that banks are all equally risky. Similarly, when you tell me to buy a technology firm because it trades at a PEG ratio lower than the PEG ratio for the technology sector, you are assuming that the firm has the same risk as other companies in the sector. The danger with implicit assumptions is that you can be lulled into a false sense of complacency, even as circumstances change. After all, does it make sense to assume that Citigroup and Wells Fargo, both large money center banks, are equally risky? Or that Adobe and Microsoft, both software firms, have the same risk exposure?

b. Quantitative versus qualitative: I am constantly accused of being too number oriented and not looking at qualitative factors enough. Perhaps, but I think the true test of whether you can do valuation is whether you can take the stories that you hear about companies and convert them into numbers for the future. Thus, if your story is that a company has loyal customers, I would expect to see the evidence in stable revenues and lots of repeat customers; as a result, the cash flows for the company will be higher and less risky. After all, at the end of the process, your dividends are not paid with qualitative dollars but with quantitative ones.

c. Simple versus complicated: Another mantra that I push is that less is more and to keep things simple. In fact, one reason that I stay with the CAPM is that it is a simple model at its core and I am reluctant to abandon it for more complex models, until I am given convincing evidence that these models work better.

So, find your own way of adjusting for risk in valuation but refine it and question it constantly. The best feedback you get will be from your investment mistakes, since they give you indicators of the risks you missed in your original assessment. As for me, I remain wedded to the fundamental principle that value is affected by risk, but not to any one risk and return model, which to me remains just a means to an end.

In the last four posts, I laid out alternatives to the CAPM beta, but all of them were structured around adjusting the discount rate for risk. Having made this pitch many times in the past, I know that there are some of you who wonder why I don't risk adjust the cash flows instead of risk adjusting the discount rate. The answer to that question, though, depends on what you mean by risk adjusting the cash flows. For the most part, here is what the proponents of this approach seem to mean: they will bring the possibility of bad scenarios (and the outcomes from those scenarios) into the expected cash flows and thus, in their view, risk adjust them. As I will argue below, that is not risk adjustment.

It is true that there are two ways in which you can adjust discounted cash flow value for risk. One is to estimate expected cash flows across all scenarios, essentially weighting the cash flow in each scenario by the likelihood of that scenario unfolding, and then to discount those expected cash flows using a risk adjusted discount rate. The other is to take the expected cash flows, replace them with "certainty equivalent" cash flows and discount those certainty equivalent cash flows at the riskfree rate.

But what are certainty equivalent cash flows? To illustrate, let me provide a simple example. Assume that you have an investment, where there are two scenarios: a good scenario, where you make $ 80 instantly and a bad one, where you lose $ 20 instantly. Assume also that the likelihood of each scenario occurring is 50%. The expected cash flow on this investment is $30 (0.50*$80 + 0.50*- $20). A risk neutral investor would be willing to pay $ 30 for this investment but a risk averse investor would not. He would pay less than $ 30, with how much less depending upon how risk averse he was. The amount he would be willing to pay would be the certainty equivalent cash flow.

Applying this concept to more complicated investments is generally difficult because there is essentially a very large number of scenarios, and estimating cash flows under each one is difficult to do. Once the expected cash flow is computed, converting it into a certainty equivalent is just as complicated. There is one practical solution, which is to take the expected cash flow and discount it back at just the risk premium component of your discount rate. Thus, if your expected cash flow in one year is $ 100 million, and your risk adjusted discount rate is 9% (with a risk free rate of 4%), the certainty equivalent for this cash flow would be:
Risk premium component of discount rate = (1.09/1.04)-1 = 4.81%
Certainty equivalent cash flow in year 1 = $ 100/ 1.0481 = $95.41
Value today = Certainty equivalent CF/ (1 + riskfree rate) = $95.41/1.04 = $91.74
Note, though, that you would get exactly the same answer using the risk adjusted discount rate approach:
Value today = Expected CF/ (1 + risk adjusted discount rate) = 100/1.09 = $91.74
Put differently, unless you have a nifty way of adjusting expected cash flows for risk that does not use risk premiums that you have already computed for your discount rates, there is nothing gained in this exercise.
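The equivalence is easy to verify numerically. A minimal Python sketch, using the same cash flow and rates as the example above:

```python
# Certainty equivalent vs. risk adjusted discounting: both yield the same value.
expected_cf = 100.0    # expected cash flow in year 1 ($ millions)
risk_free = 0.04       # risk free rate
risk_adjusted = 0.09   # risk adjusted discount rate

# Risk premium component of the discount rate: (1.09/1.04) - 1 = ~4.81%
premium = (1 + risk_adjusted) / (1 + risk_free) - 1

# Approach 1: strip the premium out of the cash flow, discount at the riskfree rate
certainty_equivalent = expected_cf / (1 + premium)   # ~95.41
value_ce = certainty_equivalent / (1 + risk_free)

# Approach 2: discount the expected cash flow at the risk adjusted rate
value_rad = expected_cf / (1 + risk_adjusted)

print(round(value_ce, 2), round(value_rad, 2))  # both 91.74
```

The two values agree exactly because the certainty equivalent strips out the same premium that the risk adjusted rate adds back.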

There are two practical approaches to certainty equivalent cash flows that I have seen used by some value investors. In the first, you consider only those cash flows from a business that are "safe" and that you can count on, when you do valuation. If you do so, and you are correct in your assessment, you don't have to risk adjust the cash flows. The next time you are told that Buffett does not risk adjust his valuations, take a look at whether this is in fact what he is doing. The second variant is an interesting twist on dividends and a throwback to Ben Graham. To the extent that companies are reluctant to cut dividends, once they initiate them, it can be argued that the dividends paid by a company reflect its view of how much of its earnings are certain. Thus, a firm that is very uncertain about future earnings may pay only 20% of its earnings as dividends, whereas one that is more certain will pay out 80%. An investor who buys stocks based upon their dividends thus has less need to worry about risk adjusting those numbers.

Bottom line. There are no short cuts in risk adjustment. It is no easier (and often more difficult) to adjust expected cash flows for risk than it is to adjust discount rates for risk. If you do use one of the short cuts - counting only safe cash flows or just dividends - recognize when these approaches will fail you (as they inevitably will) and protect yourself against those consequences.

As you can see from each of the alternatives laid out in the previous three parts, there are assumptions and models underlying each alternative that can make users uncomfortable. So, what if you want to estimate a model-free cost of equity? There is a choice, but it comes with a catch.

To see the choice, assume that you have a stock that has an expected annual dividend of $3/share next year, with growth at 4% a year and that the stock trades at $60. Using a very simple dividend discount model, you can back out the cost of equity for this company from the existing stock price:
Value of stock = Dividends next year / (Cost of equity - growth rate)
$ 60 = $3.00/ (Cost of equity -4%)
Cost of equity = 9%
The mechanics of computing implied cost of equity become messier as you go from dividends to estimated cash flows and from stable growth models to high growth models, but the principle remains the same. You can use the current stock price and solve for the cost of equity. For those of you who use Excel, the goal seek function or solver work very well at doing this job, even in the most complicated valuations.
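For anyone who prefers code to spreadsheets, the goal seek step can be replicated with a few lines of bisection. This is a sketch using the stable growth dividend discount model and the numbers from the example; the function name and solver settings are mine, not part of any standard library:

```python
def implied_cost_of_equity(price, dividend_next_year, growth,
                           lo=0.0001, hi=1.0, tol=1e-8):
    """Back out the cost of equity implied by the current stock price, using a
    stable-growth dividend discount model and bisection (a stand-in for goal seek)."""
    def value(r):
        return dividend_next_year / (r - growth)
    lo = max(lo, growth + 1e-6)  # the model is only defined for r > g
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if value(mid) > price:   # model value too high -> discount rate too low
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

r = implied_cost_of_equity(price=60.0, dividend_next_year=3.0, growth=0.04)
print(round(r, 4))  # 0.09 -> a 9% implied cost of equity
```

For messier models (estimated cash flows, multiple growth stages), the same idea applies: replace `value(r)` with whatever valuation function you are using and solve for the rate that matches the market price.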

This cost of equity is a market-implied cost of equity. If you are in corporate finance and need a cost of equity to use in your investment decisions, it would suffice. If you were required to value this company, though, using this cost of equity to value the stock would be pointless since you would arrive at a value of $ 60 and the not-surprising conclusion that the stock is fairly priced.

So, what point is there to computing an implied cost of equity? I see three possibilities.

One is to use a conventional cost of equity in the valuation and to compare the market-implied cost of equity to the conventional one to see how much "margin for error" you have in your estimate. Thus, if you find your stock to be undervalued, with an 8% cost of equity, but the implied cost of equity is 8.5%, you may very well decide not to buy the stock because your margin for error is too narrow; with an implied cost of equity of 14%, you may be more comfortable buying the stock. Think of it as a marriage of discounted cash flow valuation with a margin of safety.

The second is to compute a market-implied cost of equity for an entire sector and to use this cost as the cost of equity for all companies in that sector. Thus, I could compute an implied cost of equity of 9% for all banks, using an index of banking stocks and expected aggregate dividends on that index. I could then use that 9% cost of equity for any bank that I had to value. This, in effect, brings discounted cash flow valuation closer to relative valuation; after all, when we compare price to book ratios across banks, we are assuming that they all have the same risk (and costs of equity).

The third is to compute the market-implied cost of equity for the same company over a number of periods and to use the average of these estimates as the cost of equity when valuing the company now. You are, in effect, assuming that the market prices your stock correctly over time but can be wrong in any given time period.

I use traditional models of risk and return to estimate costs of equity in valuation, but I also use market-implied costs of equity extensively. As those of you who track my equity risk premium estimates and posts know, I compute an implied equity risk premium for the S&P 500 every month, using exactly the approach described above (though I augment dividends with buybacks). When I value individual companies, I do compare my estimates of cost of equity with the market-implied estimates. Finally, when I am concerned that the beta for a firm is not reflecting its underlying risk, because the sector itself has changed, I compute a market-implied cost of equity for the sector. For instance, after the banking crisis in 2008, I felt that using the beta for a bank or even a sector-average beta to estimate the cost of equity made no sense, since much of the data used in the estimates reflected pre-crisis returns. Consequently, I used the S&P banking index to back out an implied cost of equity (which yielded an estimate almost 4% higher than the CAPM estimate) and used it in my valuations.

Analysts have generally had an easier time estimating the cost of debt than the cost of equity, for any given firm, for a simple reason. When banks lend money to a firm, the cost of debt is explicit at least at the time of borrowing and takes the form of an interest rate. While it is true that this stated interest rate may not be a good measure of cost of debt later in the loan life, the cost of debt for firms with publicly traded bonds outstanding can be computed as the yield to maturity (an observable and updated number) on those bonds.

Armed with this insight, there are some who suggest that the cost of equity for a firm can be estimated relative to its cost of debt. Their intuition goes as follows: if the pre-tax cost of debt for a firm is 8%, its cost of equity should be higher. But how much higher? One approach that has been developed is to estimate the standard deviation in bond and stock returns for a company; both numbers should be available if both instruments are traded. The cost of equity can then be written as follows:
Cost of equity = Cost of debt (Standard deviation of equity/ Standard deviation of bond)
Thus, in the example above, if the standard deviation in stock prices is 30% and the standard deviation in bond prices is only 20%, the cost of equity will be 12%.
Cost of equity = 8% (30/20) = 12%
In fact, an alternative to using historical standard deviations is to use implied standard deviations, assuming that there are options outstanding on the stock, the bond or on both.

While this approach seems appealing, it is both dangerous and of very limited use. Note that it works only for publicly traded companies that have significant debt outstanding in the form of corporate bonds. Since these firms are generally large market cap companies, with long histories, they also tend to be companies where estimating the cost of capital using conventional approaches is easiest. This approach cannot be used for large market cap companies like Apple and Google that have no debt outstanding or for any company that has only bank debt (since it is not traded and has no standard deviation). There is also the underlying problem that the risk of investing in equity (where you get residual cash flows, and the uncertainty is about the magnitude of these cash flows) is very different from the risk of investing in the company's bonds (where the risk is that you will not get the promised payments: the upside is limited and the downside is high), and the ratio of their standard deviations may be a poor indicator of risk, at least for individual companies. It also assumes that all of the risk in equity is relevant, even though a large portion of that risk may disappear in portfolios. Consequently, you will overstate the cost of equity for firms where the bulk of the risk is firm-specific rather than market risk.

Notwithstanding these limitations, this approach can still be used as a check on costs of equity estimated using other approaches, especially for companies that have significant debt outstanding. Since the claims of equity investors can be met only after lenders' claims have been met, it is logical that the cost of equity should be higher than the pre-tax cost of debt, with the difference increasing with the proportion of cash flows being used to service debt payments. Using a simple proxy for this proportion, such as the interest coverage ratio (operating income/interest expense), I would hypothesize that the cost of equity will rise, relative to the cost of debt, as the interest coverage ratio decreases. Incidentally, this is the same rationale that we use to adjust betas for financial leverage, with beta increasing as the debt to equity ratio increases.

Friday, April 29, 2011

The conventional models for risk and return in finance (CAPM, arbitrage pricing model and even multi-factor models) start by making assumptions about how investors behave and how markets work to derive models that measure risk and link those measures to expected returns. While these models have the advantage of a foundation in economic theory, they seem to fall short in explaining differences in returns across investments. The reasons for the failure of these models run the gamut: the assumptions made about markets are unrealistic (no transactions costs, perfect information) and investors don't behave rationally (and behavioral finance research provides ample evidence of this).

With proxy models, we essentially give up on building risk and return models from economic theory. Instead, we start with how investments are priced by markets and relate returns earned to observable variables. Rather than talk in abstractions, consider the work done by Fama and French in the early 1990s. Examining returns earned by individual stocks from 1962 to 1990, they concluded that CAPM betas did not explain much of the variation in these returns. They then took a different tack and looked for company-specific variables that did a better job of explaining return differences, pinpointing two: the market capitalization of a firm and its price to book ratio (the ratio of market cap to accounting book value of equity). Specifically, they concluded that small market cap stocks earned much higher annual returns than large market cap stocks and that low price to book ratio stocks earned much higher annual returns than stocks that traded at high price to book ratios. Rather than view this as evidence of market inefficiency (which is what prior studies that had found the same phenomena had done), they argued that if these stocks earned higher returns over long time periods, they must be riskier than stocks that earned lower returns. In effect, market capitalization and price to book ratios were better proxies for risk, according to their reasoning, than betas. In fact, they regressed returns on stocks against the market capitalization of a company and its price to book ratio to arrive at the following regression for US stocks:
Expected Monthly Return = 1.77% - 0.11 [ln(Market Capitalization in millions)] + 0.35 [ln(Book/Price)]
In a pure proxy model, you could plug the market capitalization and book to market ratio for any company into this regression to get expected monthly returns.
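As an illustration of how such a proxy model would be used, here is the regression above written as a function. The sample firm (a $500 million market cap and a book-to-price ratio of 0.6) is hypothetical:

```python
import math

def ff_expected_monthly_return(market_cap_millions, book_to_price):
    """Plug a company's characteristics into the Fama-French proxy regression
    (coefficients from the text; the result is in percent per month)."""
    return (1.77
            - 0.11 * math.log(market_cap_millions)
            + 0.35 * math.log(book_to_price))

# Hypothetical firm: $500 million market cap, book value at 60% of price
print(round(ff_expected_monthly_return(500, 0.6), 2))  # about 0.91% per month
```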

In the two decades since the Fama-French paper brought proxy models to the fore, researchers have probed the data (which has become more detailed and voluminous over time) to find better and additional proxies for risk. Some of the proxies are highlighted below:
a. Earnings Momentum: Equity research analysts will find vindication in research that seems to indicate that companies that have reported stronger than expected earnings growth in the past earn higher returns than the rest of the market.
b. Price Momentum: Chartists will smile when they read this, but researchers have concluded that price momentum carries over into future periods. Thus, the expected returns will be higher for stocks that have outperformed markets in recent time periods and lower for stocks that have lagged.
c. Liquidity: In a nod to real world costs, there seems to be clear evidence that stocks that are less liquid (lower trading volume, higher bid-ask spreads) earn higher returns than more liquid stocks. In fact, I have a paper on liquidity, where I explore the estimation of a liquidity beta and liquidity risk premium to adjust expected returns for less liquid companies.

While the use of pure proxy models by practitioners is rare, they have adapted the findings from these models into their day-to-day use. Many analysts have melded the CAPM with proxy models to create composite or melded models. For instance, many analysts who value small companies derive expected returns for these companies by adding a "small cap premium" to the CAPM expected return:
Expected return = Riskfree rate + Market Beta * Equity Risk Premium + Small Cap Premium
The threshold for small capitalization varies across time but is generally set at the bottom decile of publicly traded companies and the small cap premium itself is estimated by looking at the historical premium earned by small cap stocks over the market. (In my 2011 paper on equity risk premiums, I estimate that companies in the bottom market cap decile earned 4.82% more than the overall market between 1928 and 2010.) Thus, the expected return (cost of equity) for a small cap company, with a beta of 1.20 would be:
Expected return = 3.5% + 1.2 (5%) + 4.82% = 14.32%
(I have used a riskfree rate of 3.5% and a mature market premium of 5% in my estimation)
Using the Fama-French findings, the CAPM has been expanded to include market capitalization and price to book ratios as additional variables, with the expected return stated as:
Expected return = Riskfree rate + Market Beta * Equity Risk Premium + Size beta * Small cap risk premium + Book to Market beta * Book to Market premium
The size factor and the book to market betas are estimated by regressing a stock's returns against the size premium and book to market premiums over time; this is analogous to the way we get the market beta, by regressing stock returns against overall market returns.
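A sketch of that estimation procedure, run on simulated series with numpy least squares. All of the factor series, exposures and premiums here are hypothetical, chosen only to illustrate the regression step:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 120  # ten years of hypothetical monthly observations

# Hypothetical factor series: market excess return, size premium, book-to-market premium
mkt = rng.normal(0.005, 0.045, n)
smb = rng.normal(0.002, 0.030, n)
hml = rng.normal(0.003, 0.030, n)

# Simulate a stock with known factor exposures plus firm-specific noise
stock = 0.001 + 1.1 * mkt + 0.5 * smb + 0.3 * hml + rng.normal(0, 0.02, n)

# Regress the stock's returns against the three factors (column of ones = intercept)
X = np.column_stack([np.ones(n), mkt, smb, hml])
betas, *_ = np.linalg.lstsq(X, stock, rcond=None)
intercept, b_mkt, b_smb, b_hml = betas

# Expected return = riskfree rate + sum of (beta * factor premium); illustrative premiums
expected = 0.035 + b_mkt * 0.05 + b_smb * 0.03 + b_hml * 0.04
print(round(b_mkt, 2), round(b_smb, 2), round(b_hml, 2), round(expected, 4))
```

The regression recovers exposures close to the true ones used in the simulation; with real data, of course, the true exposures are unknown and the standard errors on the estimates can be large, a point I return to below.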

While the use of proxy and melded models offers a way of adjusting expected returns to reflect market reality, there are three dangers in using these models.
a. Data mining: As the amount of data that we have on companies increases and becomes more accessible, it is inevitable that we will find more variables that are related to returns. It is also likely that most of these variables are not proxies for risk and that the correlation is a function of the time period that we look at. In effect, proxy models are statistical models and not economic models. Thus, there is no easy way to separate the variables that matter from those that do not.
b. Standard error: Since proxy models come from looking at historical data, they carry all of the burden of the noise in the data. Stock returns are extremely volatile over time, and any historical premia that we compute (for market capitalization or any other variable) are going to have significant standard errors. For instance, the small cap premium of 4.82% between 1928 and 2010 has a standard error of 2.02%; put simply, the true premium may be less than 1% or higher than 7%. The standard errors on the size and book to market betas in the three factor Fama-French model are so large that using them in practice creates almost as much noise as it adds in precision.
c. Pricing error or risk proxy: For decades, value investors have argued that you should invest in stocks with low PE ratios that trade at low multiples of book value and have high dividend yields, pointing to the fact that you will earn higher returns by doing so. (In fact, a scan of Ben Graham's screens from security analysis for cheap companies unearths most of the proxies that you see in use today.) Proxy models incorporate all of these variables into the expected return and thus render these stocks fairly priced.
Using the circular logic of these models, markets are always efficient because any inefficiency that exists is just another risk proxy that needs to get built into the model.

I have never used the Fama-French model or added a small cap premium to a CAPM model in intrinsic valuation. If I believe that small cap stocks are riskier than large stocks, I have an obligation to think of fundamental or economic reasons why and build those into my risk and return model or into the parameters of the model. Adding a small cap premium strikes me as not only a sloppy (and high error) way of adjusting expected returns but an abdication of the mission in intrinsic valuation, which is to build up your numbers from fundamentals. I do think that it makes sense to adjust your expected returns for liquidity, and I think our capacity to do so is improving as we get access to more data on liquidity and better models for incorporating that data.

Thursday, April 28, 2011

The Capital Asset Pricing Model (CAPM) is almost fifty years old and it still evokes strong responses, especially from practitioners. In academia, the CAPM lives on primarily in the archives of old journals and most researchers have moved on to newer asset pricing models. To practitioners, it represents everything that is wrong with financial theory, and beta is the cudgel that is used to beat up academics, no matter what the topic. I have never been shy about arguing the following:
a. The CAPM is a flawed model for risk and return, one among many flawed models.
b. The estimates of expected return that we get from the CAPM can be significantly improved if we use more information and remember basic statistics along the way. (I argue for using sector betas rather than a single regression beta.)
c. The expected returns we get from the CAPM (discount rates in valuation and corporate finance) are a small piece of overall corporate finance and valuation. In fact, removing the CAPM from my tool box will in no way paralyze me in my estimation of value.

Notwithstanding this, I understand the discomfort that people feel with the CAPM at several levels. First, by starting with the premise that risk is symmetric - the upside and downside are balanced - it already seems to concede the fight to beat the market. After all, a good investment should have more upside than downside; value investors in particular build their investment strategies around the ethos of minimizing downside risk while expanding upside potential. Second, the model's dependence upon past market prices to get a measure of risk (betas, after all, come from regressions) should make anyone wary: after all, markets are often volatile for no good fundamental reason. Third, the CAPM's focus on breaking down risk into diversifiable and undiversifiable risk, with only the latter being relevant for beta, does not convince some, who believe that the distinction is meaningless or should not be made.

Consequently, both academics and practitioners have been on the lookout for better ways of measuring risk and estimating expected returns. In this post, which will be the first of a few, I want to look at alternatives to the CAPM that stay with its core set-up, where the risk of an investment is measured relative to the average risk investment and expected returns are derived accordingly:
E(Return) = Riskfree Rate + Beta of investment (Expected Risk Premium for all risky investments)
Note that in this set up, the riskfree rate and expected risk premium are the same for all investments in a market and that beta alone carries the burden of measuring risk. The fact that betas are scaled around one provides for a simple intuitive hook: an investment with a beta of 1.2 is 1.2 times more risky than the average investment in the market. I have written extended papers on how best to estimate the riskfree rate and the expected equity risk premium.

I. Multi Beta Models
Contrary to conventional wisdom, which views theorists as cult followers of beta, the criticism of the CAPM in academia has been around for as long as the model itself. While the initial critiques just argued that CAPM betas did not do very well in explaining past returns, we did see two alternatives emerge by the late 1970s.
- The Arbitrage Pricing Model, which stays true to conventional portfolio theory, but allows for multiple (though unidentified) sources of market risk, with betas estimated against each one.
- The Multifactor model, which uses historical data to relate stock returns to specific macro economic variables (the level of interest rates, the slope of the yield curve, growth rate in the GDP) and estimates betas for individual companies against these macro factors.
Both models represent extensions of the CAPM, with multiple betas replacing a single market beta, and risk premiums to go with each one.
Pluses: They do better than the CAPM in explaining past return differences across investments.
Minuses: For forward looking estimates (which is what we usually need in corporate finance and valuation), the improvement over the CAPM is debatable.
Bottom line: If you don't like the CAPM because of its complexity and its assumptions about markets, you will like multi beta models even less.

II. Market Price based Models
The CAPM beta can be written as follows:
CAPM Beta = Correlation between stock and market * Standard deviation in returns of stock/ Standard deviation in returns of market
The instability in this estimate comes from the correlation input, which can be volatile and change dramatically from period to period. One alternative suggested by some is to dispense with the correlation entirely and to estimate the relative risk of a stock by dividing its standard deviation by the average (or median) standard deviation across all stocks. For instance, the median annualized standard deviation across all US stocks between 2008 and 2010 was 57.01%. The relative standard deviation scores for two firms - Apple and 3M - can be computed using their annualized standard deviations over the same period: Apple's standard deviation was 42.66% and 3M's standard deviation was 25.17%.
Apple's relative standard deviation = 42.66%/ 57.01% = 0.75
3M's relative standard deviation = 25.17%/57.01% = 0.44
These take the place of the CAPM betas and get used with the riskfree rate and equity risk premium to get expected returns.
Pluses: Standard deviations are easier to compute and more stable than correlations (and betas).
Minuses: No real economic rationale behind the model. It treats all risk as equivalent, whether it can be diversified away or not.
Bottom line: For those who want relative risk measures that look closer to what they would intuitively expect, it is an alternative. For those who do not like market based measures, it is more of the same.
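A minimal sketch of how a relative standard deviation would replace a beta in the expected return equation, using Apple's 2008-2010 figures from above; the riskfree rate and equity risk premium inputs are illustrative:

```python
def expected_return_relative_sd(stock_sd, median_sd, risk_free, equity_risk_premium):
    """Use relative standard deviation in place of a CAPM beta in the
    standard expected return equation."""
    relative_sd = stock_sd / median_sd
    return risk_free + relative_sd * equity_risk_premium

# Apple: 42.66% standard deviation vs. a 57.01% median across all US stocks
er_apple = expected_return_relative_sd(0.4266, 0.5701, risk_free=0.035,
                                       equity_risk_premium=0.05)
print(round(er_apple, 4))  # ~0.0724, i.e. about a 7.2% expected return
```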

III. Accounting information based Models
For those who are inherently suspicious of any market based measure, there is always accounting information that can be used to come up with a measure of risk. In particular, firms that have low debt ratios, high dividends, stable and growing accounting earnings and large cash holdings should be less risky to equity investors than firms without these characteristics. While the intuition is impeccable, converting it into an expected return can be problematic, but here are some choices:
a. Pick one accounting ratio and create scaled risk measures around that ratio. Thus, the median book debt to capital ratio for US companies at the start of 2011 was 51%. The book debt to capital ratio for 3M at that time was 30.91%, yielding a relative risk measure of 0.61 for the company. The perils of this approach should be clear when applied to Apple: since the firm has no debt outstanding, it yields a relative risk of zero (which is an absurd result).
b. Compute an accounting beta: Rather than estimate a beta from market prices, an accounting beta is estimated from accounting numbers. One simple approach is to relate changes in accounting earnings at a firm to changes in accounting earnings for the entire market. Firms that have more stable earnings than the rest of the market, or whose earnings movements have nothing to do with the rest of the market, will have low accounting betas. An extended version of this approach would be to estimate the accounting beta as a function of multiple accounting variables, including dividend payout ratios, debt ratios, cash balances and earnings stability, for the entire market. Plugging in the values for an individual company into this regression will yield an accounting beta for the firm. While this approach looks promising, here are some cautionary notes: accounting numbers are smoothed out and can hide risk, and they are estimated at most four times a year (as opposed to market numbers, which get minute by minute updates).
Pluses: The risk is related to a company's fundamentals, which seems more in keeping with an intrinsic valuation view of the world.
Minuses: Accounting numbers can be deceptive and the estimates can have significant errors associated with them.
Bottom line: If you truly do not trust market prices, use accounting data to construct your risk measures.
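The simple accounting beta in (b) is just the slope from regressing a firm's earnings changes on the market's. A sketch with made-up annual earnings-change series (the numbers are purely illustrative):

```python
import statistics

# Hypothetical annual % changes in earnings, for a firm and the aggregate market
firm_changes   = [0.08, -0.02, 0.12, 0.05, -0.04, 0.10, 0.03, -0.01]
market_changes = [0.06,  0.01, 0.09, 0.04, -0.05, 0.08, 0.02, -0.03]

# Accounting beta = slope of firm earnings changes on market earnings changes
# (sample covariance divided by sample variance of the market series)
mean_f = statistics.fmean(firm_changes)
mean_m = statistics.fmean(market_changes)
cov = sum((f - mean_f) * (m - mean_m)
          for f, m in zip(firm_changes, market_changes)) / (len(firm_changes) - 1)
var_m = statistics.variance(market_changes)
accounting_beta = cov / var_m
print(round(accounting_beta, 2))
```

Note how few observations go into this estimate: with annual earnings, even a decade of data gives you ten points, which is why accounting betas carry large estimation errors.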

The reason for the CAPM's endurance as a model is simple. It provides a way of estimating the required returns and costs of equity for individual companies at low cost, by requiring only one input: a market beta. For those who like that aspect of the model, but don't like the baggage that comes with the model, relative standard deviations and accounting betas provide an alternative. For those who like the theoretical underpinnings of the model but do not like the poor estimates that it yields, the arbitrage and multifactor models should appeal. For those who contest the very basis of the approach, I will look at alternatives in the next few posts.

Saturday, April 16, 2011

I have lost count of the number of times I have been taken to task for not mentioning "margin of safety" in my valuation and investment books. In general, the critique is usually couched thus: "Instead of using beta or some other portfolio theory risk measure, why don't you look at the margin of safety?". While I see the intuitive value of paying heed to the "margin of safety", I don't see the two as alternative measures of risk. In fact, I think that risk measures in valuation and margin of safety play very different roles in investing.

I know that "margin of safety" has a long history in value investing. While the term may have been in use prior to 1934, Graham and Dodd brought it into the value investing vernacular, when they used it in the first edition of "Security Analysis". Put simply, they argued that investors should buy stocks that trade at significant discounts on value and developed screens that would yield these stocks. In fact, many of Graham's screens in investment analysis (low PE, stocks that trade at a discount on net working capital) are attempts to put the margin of safety into practice.

In the years since, there have been value investors who have woven the margin of safety (MOS) into their valuation strategies. In fact, here is how I understand a savvy value investor's use of MOS. The first step in the process requires screening for companies that meet good company criteria: solid management, good products and a sustainable competitive advantage; this is often done qualitatively but can be quantified. The second step is the estimation of intrinsic value, but value investors are all over the map on how they do this: some use discounted cash flow, some use relative valuation and some look at book value. The third step is to compare the price to the intrinsic value, and that is where the MOS comes in: with a margin of safety of 40%, you would buy an asset only if its price was more than 40% below its intrinsic value.
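That final step reduces to a one-line screen. A minimal sketch, with an illustrative function name and numbers:

```python
def passes_mos_screen(price, intrinsic_value, mos=0.40):
    """Final step of the process: buy only if the price sits at least
    `mos` (a fraction) below the estimated intrinsic value."""
    return price <= (1 - mos) * intrinsic_value

# With a 40% MOS, a stock valued at 100 is a buy at 55 but not at 70
print(passes_mos_screen(55, 100))  # True
print(passes_mos_screen(70, 100))  # False
```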

The term returned to center stage when Seth Klarman, a value investing legend, used it as the title of his 1991 book. In the book, though, Seth summarizes the margin of safety as "buying assets at a significant discount to underlying business value, and giving preference to tangible assets over intangibles". Seth is a brilliant thinker (I love the letters he writes to investors..) and the book has original and interesting ways of looking at risk. I learned a great deal from it about the ethos of value investing, but it did not alter the fundamental ways in which I approached estimating intrinsic value, only the ways in which I used that value.

The basic idea behind MOS is an unexceptional one. In fact, would any investor (growth, value or a technical analyst) disagree with the notion that you would like to buy an asset at a significant discount on estimated value? Even the most daring growth investor would buy into the notion, though she may disagree about what to incorporate into intrinsic value. To integrate MOS into the investment process, we need to recognize its place in the process and its limitations.

1. Stage of the investment process: Note that the MOS is used by investors at the very last stage of the investment process, once you have screened for good companies and estimated intrinsic value. Thinking about MOS while screening for companies or estimating intrinsic value is a distraction, not a help.
Proposition 1: MOS comes into play at the end of the investment process, not at the beginning.

2. MOS is only as good as your estimate of intrinsic value: This should go without saying, but the MOS is heavily dependent on getting good and unbiased estimates of intrinsic value. Put a different way, if you consistently overestimate intrinsic value by 100% or more, having a 40% margin for error will not protect you against bad investment choices.

That is perhaps the reason why I have never understood why MOS is offered as an alternative to the standard risk and return measures used in intrinsic valuation (beta or betas). Beta is not an investment choice tool but an input (and not even the key one) into a discounted cash flow model. In other words, there is no reason why I cannot use beta to estimate intrinsic value and then use MOS to determine whether I buy the investment. If you don't like beta as your measure of risk, I completely understand, but how does using MOS provide an alternative? You still need to come up with a different way of incorporating risk into your analysis and estimating intrinsic value. (Perhaps, you would like me to use the risk free rate as my discount rate in discounted cash flow valuation and use MOS as my risk adjustment measure... That's an interesting choice and worth talking about ... I know that Buffett claims to do something similar, but he discounts only the cash flows that he believes he can count on, making his cash flows risk adjusted cash flows.)

I know.. I know... There are those who argue that you don't need to do discounted cash flow valuation to estimate intrinsic value and that there are alternatives. True, but they come with their own baggage. One is to use relative valuation: assume that the multiple (PE or EV/EBITDA) at which the sector is trading can be used to estimate the intrinsic value of your company. The upside of this approach is that it is simple and does not require an explicit risk adjustment. The downside is that you make implicit assumptions about risk and growth when you use a sector average multiple... The other is to use book value, in stated or modified form, as the intrinsic value. Not a bad way of doing things, if you trust accountants to get these numbers right...
Proposition 2: MOS does not substitute for risk assessment and intrinsic valuation, but augments them.

3. Need a measure of error in the intrinsic value estimate: If you are going to use a MOS, it cannot be a constant. Intuitively, you would expect it to vary across investments and across time. Why? The reason we build in margins for error is that we are uncertain about our own estimates of intrinsic value, and that uncertainty is not the same for all stocks. Thus, I would feel perfectly comfortable buying stock in Con Ed, a regulated utility where I feel secure about my estimates of cash flows, growth and risk, with a 20% margin of safety, whereas I would need a 40% margin of safety before buying Google or Apple, where I face more uncertainty. In a similar vein, I would have demanded a much larger margin of safety in November 2008, when macroeconomic uncertainty was substantial, than I would today, for the same stock.

While this may seem completely subjective, it does not have to be so. If we can bring probabilistic approaches (simulations, scenario analysis) to bear on intrinsic valuation, we can estimate not only intrinsic value but also the standard error in that estimate.
Proposition 3: The MOS cannot and should not be a fixed number, but should reflect the uncertainty in the assessment of intrinsic value.
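As one way of illustrating Proposition 3, here is a toy Monte Carlo sketch: it values a stable-growth perpetuity while drawing the growth rate and cost of capital from assumed distributions, then sets the MOS from the relative uncertainty of the value estimate. The cash flow of 100, both distributions, the 2% floor on the spread and the scaling rule are all assumptions for illustration, not a prescription.

```python
import random
import statistics

def simulate_intrinsic_value(n=20000, seed=42):
    """Toy Monte Carlo: value = cash flow / (cost of capital - growth),
    with growth and cost of capital drawn from assumed distributions."""
    random.seed(seed)
    values = []
    for _ in range(n):
        cash_flow = 100.0                            # next-year cash flow (assumed)
        growth = random.gauss(0.03, 0.01)            # uncertain stable growth rate
        cost_of_capital = random.gauss(0.09, 0.01)   # uncertain discount rate
        spread = cost_of_capital - growth
        if spread > 0.02:                            # keep the perpetuity well-behaved
            values.append(cash_flow / spread)
    return statistics.mean(values), statistics.pstdev(values)

mean_value, stdev_value = simulate_intrinsic_value()
# One possible rule: one standard error of relative uncertainty, capped at 50%
mos = min(0.50, stdev_value / mean_value)
```

A stable utility would show a tight value distribution and earn a small MOS; a Google or an Apple, with wider input distributions, would earn a much larger one.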

4. There is a cost to having a larger margin of safety: Adding MOS to the investment process adds a constraint, and every constraint creates a cost. What, you may wonder, is the cost of investing only in stocks that have a margin of safety of 40% or higher? Borrowing from statistics, there are two types of errors in investing: type 1 errors, where you invest in overvalued stocks thinking that they are cheap, and type 2 errors, where you don't invest in undervalued stocks because of concerns that they might be overvalued. Adding MOS to the screening process, and increasing the MOS, reduces your chance of type 1 errors but increases the possibility of type 2 errors. For individual investors or small portfolio managers, the cost of type 2 errors may be small because there are so many listed stocks and they have relatively little money to invest. However, as fund size increases, the cost of type 2 errors will also go up. I know quite a few large mutual fund managers, who claim to be value investors, who cannot find enough stocks that meet their MOS criteria and hold larger and larger amounts of the fund in cash.

It gets worse when a MOS is overlaid on top of a conservative estimate of intrinsic value. While the investments that make it through both tests may be great, there may be very few or none that meet the criteria. I would love to find a company with growing earnings and no debt, trading for less than the cash balance on its balance sheet. I would also like to play shortstop for the Yankees and slam dunk a basketball; I have no chance of doing any of those, and I would waste my time and resources trying.
Proposition 4: Being too conservative can be damaging to your long term investment prospects.
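The type 1 / type 2 tradeoff can be simulated directly. In this sketch, intrinsic-value estimates are noisy (a 30% standard error, assumed), prices are drawn uniformly around a true value of 100, and we count both error types at different MOS levels; every number here is an assumption for illustration only.

```python
import random

def error_rates(mos, n=20000, noise=0.30, seed=7):
    """Count type 1 errors (bought an overvalued stock) and type 2 errors
    (passed on an undervalued one) given noisy intrinsic-value estimates."""
    random.seed(seed)
    type1 = type2 = 0
    for _ in range(n):
        true_value = 100.0
        price = random.uniform(40.0, 160.0)               # observed market price
        estimate = true_value * random.gauss(1.0, noise)  # our flawed estimate
        buy = price <= (1 - mos) * estimate
        if buy and price > true_value:
            type1 += 1
        if not buy and price < true_value:
            type2 += 1
    return type1 / n, type2 / n

# Raising the MOS cuts type 1 errors but raises type 2 errors
loose = error_rates(0.0)
strict = error_rates(0.40)
```

Run it and the pattern in the text shows up: the stricter screen buys fewer overvalued stocks, but it also walks away from far more undervalued ones, which is exactly the cost that grows with fund size.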

So, let's call a truce. Rather than making intrinsic valuation techniques (such as DCF) the enemy and portraying portfolio theory as the black science, value investors who want to use MOS should consider incorporating useful information from both to refine MOS as an investment technique. After all, we have a shared objective. We want to generate better returns on our investments than the proverbial monkey with a dartboard... or the Vanguard 500 Index fund...