Alpha Theory Blog - News and Insights


November 02, 2018

Not every position is better off following the model position size (optimal) determined by Alpha Theory. However, the times when optimal outperforms are associated with higher forecast accuracy. If you put better forecasts into the model, the model does better. This is a straightforward demonstration of Garbage In-Garbage Out.

Correlation of Actual and Forecasted Returns for Positions that Under/Overperformed Optimal

Models are data dependent. When good data is put into the model, the model has higher predictive power. Bad data in and, well, it doesn’t have the same edge. The correlations hold if we expand into quartiles.

Correlation of Actual and Forecasted Returns for Positions that Under/Overperformed Optimal (Quartiles)

And it largely holds for deciles:

Correlation of Actual and Forecasted Returns for Positions that Under/Overperformed Optimal (Deciles)

What you’ll notice is that the overall correlation between actual and forecasted returns is fairly small, with the highest decile showing an 18% correlation. Even though the signal is faint, it is strong enough to power a model that produces positive returns.

As the data shows, it is worth taking the time to measure your historical forecasting skill. If you have positive forecasting skill, then a simple model can dramatically improve results.
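Measuring historical forecasting skill can be as simple as correlating forecasted returns with what actually happened. A minimal sketch (the function and the numbers are my own illustration, not Alpha Theory’s analysis):

```python
from math import sqrt

def forecasting_skill(forecasted, actual):
    """Pearson correlation between forecasted and realized returns.
    A positive value suggests the forecasts carry signal."""
    n = len(forecasted)
    mf = sum(forecasted) / n
    ma = sum(actual) / n
    cov = sum((f - mf) * (a - ma) for f, a in zip(forecasted, actual))
    var_f = sum((f - mf) ** 2 for f in forecasted)
    var_a = sum((a - ma) ** 2 for a in actual)
    return cov / sqrt(var_f * var_a)

# Illustrative (made-up) forecasts and realized returns:
fcst = [0.25, 0.10, -0.05, 0.40, 0.15, -0.10]
real = [0.12, 0.02, -0.08, 0.18, -0.03, 0.05]
skill = forecasting_skill(fcst, real)
```

Run this on your own history of price targets versus realized returns and you have a first-pass read on whether your forecasts carry signal.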

October 14, 2018

“Objectivity is gained by making assumptions explicit so that they may be examined and challenged.” – Richard Heuer, Psychology of Intelligence Analysis

Alpha Theory asks investors for a few basic inputs (used to calculate an expected return):

• How much can I make if I’m right?

• How much could I lose if I’m wrong?

• What are the probabilities of each?
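Those three inputs combine into a probability-weighted expected return. A minimal sketch (the numbers are illustrative):

```python
def expected_return(upside, downside, p_upside):
    """Probability-weighted expected return from the three inputs:
    how much I make if I'm right (upside), how much I lose if I'm
    wrong (downside, a negative number), and the probability of each."""
    if not 0.0 <= p_upside <= 1.0:
        raise ValueError("probability must be between 0 and 1")
    return p_upside * upside + (1.0 - p_upside) * downside

# 60% chance of a 30% gain, 40% chance of a 15% loss:
er = expected_return(0.30, -0.15, 0.60)  # 0.6*0.30 + 0.4*(-0.15) = 0.12
```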

When I tell folks that they MUST have these forecasts to make investment decisions, I often get a response of “sure, I can come up with them, but I have no idea if they are going to be right.” They’re basically conceding that since they can’t be sure they’ll be accurate, they’re not going to do it. The problem with that logic is that firms are using something to pick stocks. Position sizes don’t come out of thin air. When pressed to describe how a decision is made, these firms will describe a process that sounds very similar to the expected return calculation. They “generally” come up with a price target. They discuss and debate downside risk. They talk about conviction level. My belief is that managers feel better discussing the inputs in the abstract or implied sense, rather than making them explicit, because they can’t be sure how “right” their explicit assumptions will be. If they do make the inputs explicit, they would rather have them all componentized on a sheet instead of combined into a single expected return. I believe this is because of the misconception that one bad input spoils the whole calculation.

Granted, a bad input reduces the efficacy of the result, but it doesn’t nullify it. And this train of thought still misses the point. The real issue is that the same good or bad inputs are going into the manager’s own “mental” calculation of expected return and position size. The “garbage in-garbage out” dilemma dominates whether the process is explicit or in the manager’s head. Only by making the calculation explicit do you avoid the cognitive errors of mental calculation (see the quote at the beginning of this article). Intuition, instinct, and experience aren’t diminished by making inputs explicit; they’re just externalized so they can be properly weighed and judged.

Try an experiment. Talk through a portfolio position, going through every aspect you find relevant and ask the manager, “what is the expected return and what is the right position size?” Now do the same thing and determine an explicit reward price, risk price, and the probability of each. Use those to calculate an expected return and position size. See which process is more accurate, more repeatable, and more easily monitored. I believe you’ll find that the explicit process gives you greater confidence, better communication, and improved returns with less risk.

The chart below is from our friend Michael Mauboussin and his son’s work on this topic, “If You Say Something Is Likely…”, and is a great example of why implicit conversations should be made explicit. Imagine a scenario where an analyst states that the management team is “below average.” Do they mean the team is a 4 on a scale of 1 to 10, or a 1? Does a “really good” balance sheet mean a 6 or a 9?

Below is another sample set confirming similar results:

I’ve been “spreading the gospel” about using expected return in portfolio management for eight years and have had over 2,000 meetings. I’ve noticed a change in investor mentality over that time, and the biggest shift is the attitude towards the process. In the beginning, I had to convince managers that they needed an explicit process to be successful. Now, my anecdotal estimate is that half of the managers I meet with already realize they need to create a more explicit process. The “chasm” has been crossed, and the advantages gained from using an explicit process to pick and size stocks are moving from a competitive advantage to a cost of doing business. If a fund is still relying on instinct and heuristics in a few years, it is going to get left behind by those that embrace the process. As an analog to this shift, look at the adoption of Moneyball across sports over the ’90s and ’00s. Moneyball went from a competitive advantage to a cost of doing business in a matter of a decade. But unlike sports franchises, which can weather long droughts of poor performance, a fund that doesn’t lead will cease to exist. Good research and stock selection will always be paramount to success. But a great process is the only way to make sure great research turns into great results.

October 01, 2018

The 8 Mistakes Money Managers Make was an article I released almost five years ago. As much as I would love to report that its publication has cured all ills, the mistakes are still prevalent today. I believe it’s time for a second circulation of “the mistakes.”

The process to fix the mistakes is easy. The human behavior change is hard. I hope that this article will both show some easy-to-implement processes and start conversations about how to change your own behavior and that of your firm.

We’re here to help. We have solutions and services designed to help managers who want to improve and outpace their competition stuck in a “pre-Moneyball” mindset.

September 08, 2018

What is your 6th best idea? If you run a portfolio, the answer should be at your fingertips. The issue is that for an overwhelming majority of the managers I’ve spoken with, it is not. Portfolio management, in its simplest form, is allocating more capital to the better ideas and less to the weaker ideas. If you can’t quickly determine your 6th best idea, then there are almost certainly mistakes. Mistakes come in the form of great ideas with too little capital that leave potential return on the table and weak ideas with too much capital that add too much risk.

The first step: admitting there is a problem 😊. Step two: determine how you measure an idea’s quality. It’ll end up being some mix of expected return, return hurdle, risk potential, conviction level, liquidity, etc. These are factors that every portfolio manager considers when sizing positions, but generally, each factor’s importance is weighed in the portfolio manager’s head. To be able to answer the question “what is my 6th best idea?”, these “rules” need to be made explicit so that they can be externalized and run the same way against every asset in real time.

The new model approximates what you previously solved with your mental calculator. It isn’t perfect, but it gives you an explicit answer you can debate. It will highlight inconsistencies, like when your 6th best idea is your 16th largest position. Then the question becomes: should we add to this position, or is there a reason the model doesn’t account for?
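As a rough sketch of what an explicit idea-quality rank can look like (the factors, weights, and numbers below are hypothetical, not Alpha Theory’s actual model):

```python
def quality_rank(ideas, weights):
    """Sort ideas best-first by a weighted quality score."""
    def score(idea):
        return sum(w * idea[k] for k, w in weights.items())
    return sorted(ideas, key=score, reverse=True)

# Hypothetical factor weights and idea data for illustration:
weights = {"expected_return": 0.6, "conviction": 0.4}
ideas = [
    {"name": "A", "expected_return": 0.30, "conviction": 0.9},
    {"name": "B", "expected_return": 0.22, "conviction": 0.7},
    {"name": "C", "expected_return": 0.18, "conviction": 0.8},
    {"name": "D", "expected_return": 0.12, "conviction": 0.6},
    {"name": "E", "expected_return": 0.08, "conviction": 0.9},
    {"name": "F", "expected_return": 0.05, "conviction": 0.5},
]
ranked = quality_rank(ideas, weights)
sixth_best = ranked[5]["name"]
```

With the score made explicit, comparing the quality rank against the position-size rank is a one-line check rather than a mental exercise.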

Ask yourself if you can quickly determine your 6th best idea today. If not, reflect on how your process would improve if you had an idea quality rank compared to its position size. If you want to see a system like this working in practice, let us know and we’ll show you a version with your own data.

August 17, 2018

One of the members of our Customer Success team was wondering about the difficulty of getting client attention at the end of August. We ran an analysis to try and answer the question, “how active are our clients by month?” We used price target updates, logins, and trades per month as a proxy for investor activity.

August was definitely the softest month, but clients weren’t as “checked out” as we expected. We hypothesized that the peak periods would be during earnings season and the troughs would come after earnings. Here’s the rub: they fall in the same month. The end of second-quarter earnings season and the last stretch of summer vacation before school starts are in the same month.

To remedy this, we created periods starting on the 15th of each month (e.g., August 15th to September 15th). This allows us to catch each earnings season as its own isolated period. Here are the results:

There is clear seasonality. The post-Q2 earnings season is 2.5 standard deviations from the norm. I suspect that if we broke this down into two-week tranches, we would see an even more pronounced deviation from August 15th to August 31st.

As expected, the Post Earnings Season cohort’s activity was light at 0.7 standard deviations below normal activity, while the During Earnings cohort was busy (+0.8).
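The 15th-to-15th period construction is simple to sketch. Here’s a minimal version of the bucketing (my own code, not our production logic):

```python
from datetime import date

def mid_month_period(d):
    """Label a date with the 15th-to-15th period it falls in.

    A date on or after the 15th belongs to the period starting that
    month; a date before the 15th belongs to the period starting the
    prior month."""
    if d.day >= 15:
        return (d.year, d.month)
    prior = d.month - 1 or 12               # December for January dates
    year = d.year if d.month > 1 else d.year - 1
    return (year, prior)

# August 20 and September 10 both fall in the Aug-15-to-Sep-15 period:
p1 = mid_month_period(date(2018, 8, 20))
p2 = mid_month_period(date(2018, 9, 10))
```

Group price target updates, logins, and trades by this key and each earnings season lands in its own isolated bucket.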

One of my favorite parts of working at Alpha Theory is that we have a long series of robust, structured data that allows us to ask and answer interesting questions. If you would like to be able to do the same, the first step is collecting and maintaining well-structured data. Then you can ask interesting questions like “in what season do we make the most money?”, “who is the best forecaster on my team?”, “how often do stocks go below our risk targets?”, etc.

July 05, 2018

In preparation for a webinar we hosted about the Concentration Manifesto on June 21st, a client questioned using batting average (win percentage) as a way of measuring skill. Their contention was that high batting averages do not always result in great returns, because a low hit rate with high asymmetry (lots of upside with little downside) can be even more profitable than predictable low returners.

To analyze that point, we looked at the Return on Invested Capital (ROIC) by the same buckets we analyzed batting average.

You can see that there is a similar correlation. Assets that are sized the largest had the highest return on invested capital. Said another way, the Top 5 positions went up an average of 12.1% while the portfolio as a whole went up 8.4% (for shorts, went down 8.4%). That’s 44% better!

We then analyzed the distribution of returns by bucket.

Again, you can see a predictive quality in manager position sizing. Stocks with smaller positions have a wider distribution of returns (and more downside). The smallest positions had the most upside, but what we see in the data is that managers can identify more volatile positions and size them accordingly.

To finish the point, I’ll pull up a chart from the original Concentration Manifesto, where we used our clients’ forecasted returns (Expected Return) to create two portfolios: one with the 20 best forecasted returns, and another with the rest. In the graph below, you can see that managers can forecast which assets will have the best returns. This shows skill associated not just with position sizing, but with forecasting price returns.

There is very little question that our clients demonstrate skill. There is also very little question that they have mitigated a substantial portion of their skill by having too many positions.

June 04, 2018

I was working with a client recently and we were discussing their use of discounted cash flow analysis (DCF). Most of our clients are value investors, so DCF is a key tool for many of our clients, especially when valuing businesses where the major value to be unlocked is more than a year in the future.

Here is the problem. The client was double counting risk through the risk premium in their discount rate. What exactly does that mean?

Below is a simple DCF, where there is a single stream of cash flows. The investor picks a terminal date and terminal multiple and then discounts the Terminal Value back to today. The discount rate is usually a combination of the risk-free rate and a risk premium (cost of capital) that accounts for the “riskiness” of the stream of cash flows. One of the biggest challenges is the sensitivity to the discount rate. Small changes have large impacts on the total value (there is a 15% difference in valuation if I use 2% above or 2% below the current discount rate).

The most subjective assumption in the analysis above is the risk-premium in the discount rate. It is required when looking at a single stream of cash flows. But, for investors that use scenario analysis, a risk-premium isn’t required. That’s because the risk premium (the “riskiness” of the cash flow streams) is accounted for in the forecast of risk scenarios with probabilities:

In this case, only the risk-free rate is needed in the discount rate. The probabilities and multiple scenarios account for the “riskiness” of the cash flow streams.

This benefits scenario-based investors in three ways:

1. NO RISK PREMIUM: The Risk Premium assumption is subjective and creates extreme sensitivity in DCF analysis. Removing this step reduces the noise in the analysis.

2. NO DOUBLE COUNTING: Using this approach means that there is no double counting of risk (risk premium + Risk scenario).

3. EFFECTIVELY ACCOUNT FOR RISK SCENARIOS: What’s the right risk premium to add to the discount rate for a Risk scenario that is bankruptcy? 12%? 18%? 26%? It’s a question that doesn’t need to be answered when there is an actual probability-weighted scenario that includes bankruptcy as part of the entire analysis (how you size a position by scenario is a topic for another blog).
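To make the contrast concrete, here is a hedged sketch of the two approaches (all rates, probabilities, and terminal values are made-up illustrations, not a client’s numbers):

```python
def present_value(future_value, rate, years):
    """Discount a single future value back to today."""
    return future_value / (1.0 + rate) ** years

# Single-stream DCF: one terminal value, discounted at the risk-free
# rate plus a subjective risk premium.
risk_free, risk_premium = 0.03, 0.07
single_stream = present_value(150.0, risk_free + risk_premium, 5)

# Scenario-based DCF: the "riskiness" lives in the probabilities
# (including an explicit bankruptcy scenario at a value of zero),
# so only the risk-free rate enters the discount rate.
scenarios = [(0.50, 200.0), (0.35, 120.0), (0.15, 0.0)]  # (probability, terminal value)
weighted_terminal = sum(p * v for p, v in scenarios)
scenario_value = present_value(weighted_terminal, risk_free, 5)
```

In the scenario version, there is no risk premium to argue about; the debate moves to the probabilities and terminal values, which is where it belongs.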

I think there is general confusion about using DCFs and scenario analysis. For most of us, DCFs came first. We learned to build them with a single stream of cash flows discounted back to present value. We learned scenario analysis at a different time and merged the two together on our own. There is overlap in those two methods, and hopefully this article will prompt a discussion for those funds using both DCF and scenario analysis.

May 03, 2018

In my last post, I discussed the negative impact of positive skew for active managers. Basically, that more than 50% of all stocks in a given market underperform the average because there are stocks that go up more than 100% but no stocks that go down more than 100%. This means that if you pick a random portfolio of stocks from the market, you have a greater than 50% chance of underperforming the market because most portfolios will not hold those few stocks that went up more than 100%.

Because of the popularity of the last post and TV appearance, we spent time digging further into the data to answer questions posed by readers and viewers. We noticed that the average stock return and the index return tend to differ.

And that is the problem with using the average stock return as the hurdle for funds. Investors are not measured against the average stock return, they’re measured against the benchmark, typically the S&P 500. Most indexes are market cap weighted, meaning that the index return and the average stock return are generally different.

In the example below, we’ve taken the current S&P 500 constituents, calculated their returns since the beginning of 2012, and compared that to an average return (Equal Weighted) and the actual return of the S&P 500. The S&P 500 over that period was up 136% vs. 175% for the average stock (this isn’t a perfect analysis because the constituents in the index changed over that time, but it is an approximation).

The graph above shows the distribution of individual stock returns over that period. You can see the outliers that pull the average stock return (red line) up to a point where 63% of individual securities underperform the average of 175%. But the S&P 500 was up 136% (green line) over that period, so only 51% of stocks underperformed the benchmark. Pretty much a coin flip.
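A toy Monte Carlo makes the mechanics visible. Everything below is illustrative (a made-up, positively skewed universe, not market data): the mean sits above the median, and most random portfolios miss the big winners and trail the average stock.

```python
import random

random.seed(42)

# Toy universe with positive skew: 98% of stocks return between -40%
# and +40%, while 2% are big winners returning several hundred percent.
universe = [random.uniform(-0.40, 0.40) for _ in range(980)]
universe += [random.uniform(4.0, 9.0) for _ in range(20)]

avg_stock = sum(universe) / len(universe)
median_stock = sorted(universe)[len(universe) // 2]

# How often does a random 20-stock portfolio beat the average stock?
trials = 2000
wins = sum(
    sum(random.sample(universe, 20)) / 20 > avg_stock
    for _ in range(trials)
)
beat_rate = wins / trials  # well below a coin flip
```

The portfolios that do win are the ones that happen to hold a big winner, which is exactly the dynamic the chart above shows.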

We brought up positive skew with Andrew Wellington at Lyrical Asset Management. They have done some great analysis comparing the top 1,000 stocks by market cap in the US to the S&P 500 each year going back to 1998.

Source: FactSet and Lyrical Asset Management

As you can see in the chart above, the average stock beating the S&P 500 index is a coin flip. For the past 20 years, the likelihood of any individual stock beating the S&P 500 in any given year is 50.2%. If I build random portfolios using the Top 1000 stocks in the US, there is a high likelihood that the portfolio return will be close to the S&P 500 return.

Some years are clearly better than others. ’98 and ’99 were horrible stock picking years. If you didn’t own the few stocks that had meteoric rises, you had a high likelihood of underperforming the S&P 500. ’01 and ’02 were good stock picking years. Over 60% of stocks beat the index.

What this means is that any given fund’s batting average should be compared to the batting average of the universe of stocks against the benchmark. A 54% batting average in ’98 is heroic; in ’03, 54% is just in line. Take a look at 2017: it was the 3rd hardest stock-picking environment in the last 20 years by this metric.

But what about other indices? Thankfully, our friend Julien Messias from Quantology Capital Management has done the analysis (1999-2014) comparing the S&P 500 and Russell 2000. Below are thoughts from Julien on the topic:

The Russell 2000 components’ returns exhibit a much more leptokurtic (fat-tailed) distribution than the S&P 500’s, meaning that a huge portion of the index’s components suffer large losses (or even bankruptcies), with an average of more than 60% of the components underperforming the index and 2% of the components posting huge performance (more than 500% per year). The performance of the index is therefore pulled up by those latter 2%.

Assuming a stock picker chooses investments at random within the index universe, his performance should be closer to the median performance of the components than to the index performance itself. Therefore, given that the median performance is almost always lower than the index performance (see chart below), an investor in Russell 2000 securities is very likely to underperform and very unlikely to outperform.

The S&P 500 distribution is much more mean-centered, with very shallow/thin tails, meaning that the average stock picker is much more likely to generate a performance close to the index performance (graph from Lyrical AM) and less likely to underperform.

Source: Quantology CM

The Russell 2000 displays the impact of positive skew more clearly because it is less affected by the contribution of a few very large companies. AAPL, MSFT, GOOG, and AMZN make up 12.2% of the S&P 500, while the Russell 2000’s top 4 positions make up 1.7% of that index. The result is that the average return of all stocks in the Russell 2000 is much closer to the Russell 2000 index return than the average of all stocks in the S&P 500 is to its index (recall the large difference in the 2012-to-2018 analysis, which showed the S&P 500 return was 136% vs. 175% for the average stock).

This means that the index chosen as the benchmark for your fund has a profound impact on your ability to beat it. More specifically, the probability of beating the S&P 500 with a random portfolio is 50%; for the Russell 2000, it’s 42%.

There has been quite a bit of press regarding positive skew. It’s a great conversation, but for the average fund that is measured against the S&P 500, the impact is overblown. Almost every investor is compared against a benchmark. I recommend that you dig a layer into your benchmark and measure its positive skew, the likelihood of beating the average stock return, and the likelihood of beating the index return, and compare your hit rate against the universe’s hit rate each year to know how difficult or easy it was for you in any given year.

April 06, 2018

Let’s play a game. In this game, there are 10 random poker chips in a bag. Nine of these chips will give you a return between -8% and +8% on the money that you bet. The 10th chip will give you a 100% return. The distribution of returns for this game has a positive skew.

If offered the chance to put money down on this proposition, you would take it, because you would expect a 10% return if you could play the game over and over.

Now let’s add a wrinkle. Your goal isn’t just to make a positive return; you have to beat the bag. The bag puts 10% of its money on each chip and pulls them all. Voila, a 10% return. One last wrinkle: you can only pick one chip at a time.

How many times out of 10 would you beat the bag? Only 1 in 10. 90% of the time you would lose to the bag. It doesn’t matter if we expand the number of chips, as long as the bag maintains the same positive skew (we could increase to 100 chips and you pick 10, or 1,000 chips and you pick 100, etc.).
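The game above is easy to simulate. A minimal sketch using the chip values from the example (my own code):

```python
import random

random.seed(7)

def play_round():
    """One round: 9 chips return between -8% and +8%, the 10th returns
    +100%. The bag earns the average of all chips (about 10%); you draw
    a single chip and try to beat the bag."""
    chips = [random.uniform(-0.08, 0.08) for _ in range(9)] + [1.00]
    bag_return = sum(chips) / len(chips)
    your_chip = random.choice(chips)
    return your_chip > bag_return

rounds = 10_000
win_rate = sum(play_round() for _ in range(rounds)) / rounds  # roughly 1 in 10
```

Only the jackpot chip reliably beats the bag, so the single-chip player wins roughly one round in ten, exactly the intuition in the text.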

By now, you’ve probably guessed that the bag is the market, the chips are stocks, and you are, well, you. This is the game we play when trying to beat an index. True, you can be better than the market at figuring out the good chips, but given that the initial conditions of a random game mean you lose 9 out of 10 times, it’s really hard to beat the market. Add fees, and the likelihood of beating the market goes down even further.

Positive skewness has gotten a decent amount of press over the past year because of the championing of JB Heaton, who wrote a paper [1] researching the impacts of positive skew on manager underperformance. Heaton’s paper is similar to research from Dr. Richard Shockley in 1998 [2]. See below for an article written by Bloomberg News on the topic.

Given that many of the conversations active managers have today revolve around active versus passive, “positive skew” should be top of mind. This is my push to increase awareness.

Given that active managers can’t change market skew, what should we do? We could measure skill in a different way. Let’s say I want to measure a manager’s skill. I can take all of the stocks in the markets they invest in and randomly build 100,000 portfolios with the same number of securities the manager holds. I can then plot where the manager falls on that distribution and give them a Z-Score for how far away from the norm they are. I could do the same thing for hedge funds by randomly buying and selling securities in the same universe as the investor.
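A minimal sketch of that idea (the universe, holdings count, and trial count are assumptions for illustration, and the trial count is scaled down from 100,000 to keep it quick):

```python
import random
import statistics

random.seed(0)

def random_portfolio_zscore(manager_return, universe, n_holdings, trials=5000):
    """Z-score of a manager's return versus same-size random portfolios
    drawn from the same universe of stock returns."""
    sims = [
        sum(random.sample(universe, n_holdings)) / n_holdings
        for _ in range(trials)
    ]
    mu = statistics.fmean(sims)
    sigma = statistics.stdev(sims)
    return (manager_return - mu) / sigma

# Toy universe: 500 stock returns centered near +5% with 20% volatility.
universe = [random.gauss(0.05, 0.20) for _ in range(500)]
z = random_portfolio_zscore(0.15, universe, 25)
```

A manager return well above the random-portfolio norm produces a clearly positive Z-score, which is a more realistic read on skill than raw over/underperformance.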

I’m not saying that this excuses active managers from underperforming passive strategies, but it should at least be a more realistic assessment of their skill. My hope is that positive skew becomes just as common an explanation as fees when discussing active manager underperformance. Only by knowing the causes, will we be able to make changes that allow active managers to outperform.

[2] “Why Active Managers Underperform the S&P 500: The Impact of Size and Skewness,” published in the inaugural issue of the Journal of Private Portfolio Management. One of the original authors of the study is Richard Shockley.

March 12, 2018

Learn how to enhance your investment results in this great podcast from Ted Seides and his guests, Clare Flynn Levy from Essentia Analytics and Cameron Hight from Alpha Theory.

This conversation covers the founding of these two respective businesses, the mistakes portfolio managers commonly make, the tools they employ to help managers improve, and the challenges they face in broader adoption of these modern tools. The good news is that the clients of Essentia Analytics and Alpha Theory have demonstrated improvement in their results after employing these techniques. If you ask Clare and Cameron, you may develop a whole new appreciation for the potential of active management going forward.

By creating a disciplined, real-time process based on a decision algorithm with roots in actuarial science, physics, and poker, Alpha Theory takes the guessing out of position sizing and allows managers to focus on what they do best – picking stocks.

In this podcast, you will learn how Alpha Theory allows portfolio managers to convert their implicit assumptions into an explicit decision-making process.

To learn how this method could be applicable to your decision-making process: