No Price Discovery Then No Markets; A Reader’s Question

Has the meteoric rise of passive investing generated the “greatest bubble ever”?
The better we understand the baked-in biases of algorithmic investing, the closer we can come to answers.

The following article was originally published in “What I Learned This Week” on June 15, 2017. To learn more about 13D’s investment research, visit https://latest.13d.com/tagged/wiltw.

In an article for Bloomberg View last week titled “Why It’s Smart to Worry About ETFs”, Noah Smith wrote the following prescient truth: “No one knows the basic laws that govern asset markets, so there’s a tendency to use new technologies until they fail, then start over.” As we explored in WILTW June 1, 2017, algorithmic accountability has become a rising concern among technologists as we stand at the precipice of the machine-learning age. For more than a decade, blind faith in the impartiality of math has suppressed proper accounting for the inevitable biases and vulnerabilities baked into the algorithms that dominate the Digital Age. In no sector could this faith prove more costly than finance.

The rise of passive investing has been well-reported, yet the statistics remain staggering. According to Bloomberg, Vanguard saw net inflows of $2 billion per day during the first quarter of this year. According to The Wall Street Journal, quantitative hedge funds are now responsible for 27% of all U.S. stock trades by investors, up from 14% in 2013. Based on a recent Bernstein Research prediction, 50% of all assets under management in the U.S. will be passively managed by early 2018.

In these pages, we have time and again expressed concern about the potential distortions passive investing is creating. Today, evidence is everywhere in the U.S. economy — record-low volatility despite a news cycle defined by turbulence; a stock market dominated by extreme top-heaviness; and ever-widening valuation divergences among no-growth companies. As always, the key questions are when will passive strategies backfire, what will prove the trigger, and how can we mitigate the damage to our portfolios? The better we understand the baked-in biases of algorithmic investing, the closer we can come to answers.

Over the last year, few have sounded the passive alarm as loudly as Steven Bregman, co-founder of investment advisor Horizon Kinetics. He believes record ETF inflows have generated “the greatest bubble ever” — “a massive systemic risk to which everyone who believes they are well-diversified in the conventional sense are now exposed.”

Bregman explained his rationale in a speech at a Grant’s conference in October:
“In the past two years, the most outstanding mutual fund and holding-company managers of the past couple of decades, each with different styles, with limited overlap in their portfolios, collectively and simultaneously underperformed the S&P 500…There is no precedent for this. It’s never happened before. It is important to understand why. Is it really because they invested poorly? In other words, were they the anomaly for underperforming — and is it reasonable to believe that they all lost their touch at the same time, they all got stupid together? Or was it the S&P 500 that was the anomaly for outperforming? One part of the answer we know… If active managers behave in a dysfunctional manner, it will eventually be reflected in underperformance relative to their benchmark, and they can be dismissed. If the passive investors behave dysfunctionally, by definition this cannot be reflected in underperformance, since the indices are the benchmark.”

At the heart of passive “dysfunction” are two key algorithmic biases: the marginalization of price discovery and the herd effect. Because shares are not bought individually, ETFs neglect company-by-company due diligence. This is not a problem when active managers can serve as a counterbalance. However, the more capital that floods into ETFs, the less power active managers possess to force algorithmic realignments. In fact, active managers are incentivized to join the herd—they underperform if they challenge ETF movements based on price discovery. This allows the herd to crowd assets and escalate their power without accountability to fundamentals.

With Exxon as his example, Bregman puts the crisis of price discovery in a real-world context:

“Aside from being 25% of the iShares U.S. Energy ETF, 22% of the Vanguard Energy ETF, and so forth, Exxon is simultaneously a Dividend Growth stock and a Deep Value stock. It is in the USA Quality Factor ETF and in the Weak Dollar U.S. Equity ETF. Get this: It’s both a Momentum Tilt stock and a Low Volatility stock. It sounds like a vaudeville act…Say in 2013, on a bench in a train station, you came upon a page torn from an ExxonMobil financial statement that a time traveler from 2016 had inadvertently left behind. There it is before you: detailed, factual knowledge of Exxon’s results three years into the future. You’d know everything except, like a morality fable, the stock price: oil prices down 50%, revenue down 46%, earnings down 75%, the dividend-payout ratio almost 3x earnings. If you shorted, you would have lost money…There is no factor in the algorithm for valuation. No analyst at the ETF organizer—or at the Pension Fund that might be investing—is concerned about it; it’s not in the job description. There is, really, no price discovery. And if there’s no price discovery, is there really a market?”

We see a similar dynamic at play with quants. Competitive advantage comes from finding data points and correlations that give an edge. However, incomplete or esoteric data can mislead algorithms, so the pool of valuable insights is self-limiting. Meaning, the more money quants manage, the more the same inputs and formulas are utilized, crowding certain assets. This dynamic is what caused the “quant meltdown” of 2007. Since then, quants have become more sophisticated as they integrate machine learning, yet the risk of overused algorithmic strategies remains.

As Wolf Richter of Wolf Street put it:

“It seems algos are programmed with a bias to buy. Individual stocks have risen to ludicrous levels that leave rational humans scratching their heads. But since everything always goes up, and even small dips are big buying opportunities for these algos, machine learning teaches algos precisely that, and it becomes a self-propagating machine, until something trips a limit somewhere.”

As Richter suggests, there’s a flip side to the self-propagating coin. If algorithms have a bias to buy, they can also have a bias to sell. As we explored in WILTW February 11, 2016, we are concerned about how passive strategies will react to a severe market shock. If a key sector failure, a geopolitical crisis, or even an unknown, “black box” bias pulls an algorithmic risk trigger, will the herd run all at once? With such a concentrated market, a growing share of assets in weak hands has the power to create a devastating “sell” cascade — a risk tech giant stocks demonstrated over the past week.

With leverage on the rise, the potential for a “sell” cascade appears particularly threatening. Quant algorithms are designed to read market tranquility as a buy signal for risky assets — another bias of concern. Currently, this is pushing leverage higher. As reported by The Financial Times, Morgan Stanley calculates that equity exposure of risk parity funds is now at its highest level since its records began in 1999.

This risk is compounded by the ETF transparency problem. Because assets are bundled, it may take dangerously long to identify a toxic asset. And once toxicity is identified, the average investor may not be able to differentiate between healthy and infected ETFs. (A similar problem exacerbated market volatility during the subprime mortgage crisis a decade ago.) As Noah Smith writes, this could create a liquidity crisis: “Liquidity in the ETF market might suddenly dry up, as everyone tries to figure out which ETFs have lots of junk and which ones don’t.”

J.P. Morgan estimated this week that passive and quantitative investors now account for 60% of equity assets, compared to less than 30% a decade ago. Moreover, the firm estimates that only 10% of trading volumes now originate from fundamental discretionary traders. This unprecedented rate of change no doubt opens the door to unaccountability, miscalculation and, in turn, unforeseen consequences. We will continue to track developments closely as we try to pinpoint tipping points and safe havens. As we’ve discussed time and again with algorithms, advancement and transparency are most often opposing forces. If we don’t pry open the passive black box, we will miss the biases hidden within. And given the power passive strategies have rapidly accrued, perpetuating blind faith could prove devastating.

A reader’s question, posted below so the many intelligent folks who read this can chip in their thoughts:

The part that confuses me the most is this:

From what I gather, Greenblatt typically calculates a normalized measure of EBITDA – MCX (maintenance capital expenditures). He then puts a conservative multiple on this, typically 8 or 10 times EBITDA – MCX, and says higher-quality companies may deserve 12x or more. He often says something like, “this is a 10% cash return that is growing at 6% a year; a growing income is worth much more than a flat income.” He seems to do this on pages 309–310 of the notes you sent me, complete-notes-on-special-sit-class-joel-greenblatt_2.

My question is: Greenblatt’s calculation of earnings (EBITDA – MCX) only includes the maintenance portion of capital expenditure. The actual cash flow may be lower because of growth capex. Yet he assumes a 6% growing income. It seems strange to me that he calculates the steady-state income (no growth capex, only maintenance capex) but assumes that the income will grow. He seems to assume the income will grow 6% without including the growth capex in his earnings calculation. Is it logical to assume that the steady-state earnings will grow while not deducting the cost of that growth capex from the earnings?
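A rough sketch of the arithmetic behind the question may help frame the discussion. All the numbers below (EBITDA, MCX, discount rate) are hypothetical illustrations, not figures from Greenblatt’s class notes:

```python
# Hypothetical numbers -- chosen only to illustrate the reader's question.
ebitda = 100.0                    # normalized EBITDA
mcx = 20.0                        # maintenance capex only (no growth capex)
owner_earnings = ebitda - mcx     # steady-state pre-tax "owner earnings" = 80

# Greenblatt's conservative 8x-10x multiple on steady-state earnings:
low_value = 8 * owner_earnings    # 640
high_value = 10 * owner_earnings  # 800

# The "10% cash return growing at 6%" framing: valuing the same earnings
# as a growing perpetuity, value = E / (r - g). With an assumed 14%
# required return and 6% growth, the implied multiple is higher:
r, g = 0.14, 0.06
growing_value = owner_earnings / (r - g)   # 80 / 0.08 = 1000, i.e. 12.5x

print(low_value, high_value, growing_value)
```

The tension the reader identifies lives in the last step: the growing-perpetuity value assumes 6% growth, yet `owner_earnings` was computed without deducting the growth capex that would fund it.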

11 responses to “No Price Discovery Then No Markets; A Reader’s Question”

I think what he is doing is this:
He is calculating earnings for the current year first. That would be the same whether or not he is investing funds for the future. If you deducted his growth capex, you would punish the current earnings power even though those investments are improving the future earnings power.
And it can be logical to assume earnings will grow if you invest funds wisely.
You need to separate earnings from cash flow and know when each one is important.

Now I have a question. How precise do you need to be when you try to understand how much earnings will rise? Is it sufficient to conclude that earnings are likely to rise, thereby justifying a higher multiple, or do you think Greenblatt attempts to calculate precisely by how much?

When reading You Can Be a Stock Market Genius, I get the impression that he is happy to just accept a multiple that is superior to other stocks, e.g. by comparing using Value Line. But is that how he actually works?

My answer to the Greenblatt question would be that it doesn’t matter as long as you have a margin of safety.

It is unwise to look for precision in such numbers. As long as a range of values can be estimated, with the safety of the current market price being lower than the lowest or the median value of the range, then pull the trigger.
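The range-of-values discipline described above can be sketched in a few lines. The function name and the sample estimates are hypothetical, for illustration only:

```python
# Minimal sketch of a margin-of-safety check: buy only when price sits
# below the lowest (or, less strictly, the median) of a range of value
# estimates. All numbers are hypothetical.
def should_buy(price, value_estimates, use_median=False):
    """Return True if price offers a margin of safety vs. the estimates."""
    ordered = sorted(value_estimates)
    if use_median:
        threshold = ordered[len(ordered) // 2]
    else:
        threshold = ordered[0]   # strictest test: below the lowest estimate
    return price < threshold

# e.g. values implied by 8x, 10x, and 12.5x of 80 in owner earnings:
estimates = [640.0, 800.0, 1000.0]
print(should_buy(500.0, estimates))        # below the lowest estimate
print(should_buy(700.0, estimates))        # above the lowest estimate
print(should_buy(700.0, estimates, True))  # but below the median
```

The point is not the mechanics but the posture: no single precise value is needed, only confidence that the price sits comfortably below the whole estimated range.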

Hi,
I think he is conservative and does not want to pay for growth; although he says it grows by 6%, the multiples he is using are for no-growth situations. For non-moat businesses, growth has zero value at most and might even have negative value (the return on capital for the business is less than what you demand). For moat businesses, the risk in 1) determining the growth rate, 2) determining the growth period, and 3) determining the run-off rate of growth is very high. Being conservative is easier and safer.

I’m not sure your interpretation is correct. Greenblatt often talks about different quality companies deserving higher or lower multiples. He explained his Moody’s investment where he says he paid 20x earnings, but it was actually worth 30x earnings. Greenblatt does seem to have some kind of calibration in his mind of higher growth/higher quality companies deserving higher multiples.

I think the EBITDA-MCX multiple is just used as a common yardstick for comparing companies as if they are not investing for growth but just continuing operations as is. This puts the comparison on equal ground and becomes the base multiple before accounting for growth; afterwards, you can adjust for growth.

In this case the only growth I could think of is inflation or the ability to raise prices without affecting demand (e.g. See’s Candies).

I am just a humble reader and will wait for more intelligent folks to answer.

I am no more “intelligent” than the recent commentators here, but I did sit in the back of Greenblatt’s Spec. Sits class for a few years, while some of the MBA students played video games on their laptops. (not everyone cares about investing).

If the following is still unclear, don’t hesitate to ask. I am traveling so I may not reply for a day or so.

Greenblatt uses EBITDA-MCX as a stripped-down metric to compare “pre-tax owners’ earnings” between companies. MCX, or maintenance capital expenditure, is what you need to MAINTAIN your business. If you run a motel, then every year you have to repaint some rooms, replace carpet, towels, etc. BUT if your crosstown neighbor in a similar price bracket puts in high-speed WIFI, then you might have to INCREASE your MCX to MAINTAIN your competitive position. In other words, THINK about what TRUE MCX is through your knowledge of competitive economics and the industry.

Growth capex is a separate issue, as when you open a new store. Growth within a franchise will bring growth in real earnings, but investing in growth capex doesn’t automatically mean an increase in earnings. If you borrow at 10% but increase cash flows by only 8%, you are in tough circumstances. IF you remember only one thing, realize that GROWTH is not always the investor’s friend — only profitable growth is.
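The borrow-at-10%-earn-8% point above reduces to simple perpetuity arithmetic. The function and numbers below are a hypothetical sketch, not Greenblatt’s formula:

```python
# Illustration of "growth is not always the investor's friend":
# a growth project adds value only when its return on capital exceeds
# the cost of the capital that funds it. Numbers are hypothetical.
def growth_value_added(capex, cost_of_capital, return_on_capex):
    """Net value of a growth project, treating the incremental cash flow
    and the financing cost as perpetuities capitalized at the cost of capital."""
    cash_flow = capex * return_on_capex      # perpetual cash flow gained
    financing = capex * cost_of_capital      # perpetual cost of the funds
    return (cash_flow - financing) / cost_of_capital

# Borrow at 10% to fund a project returning 8%: value is destroyed.
print(growth_value_added(100.0, 0.10, 0.08))   # -20.0
# The same capex earning 15% creates value.
print(growth_value_added(100.0, 0.10, 0.15))   # 50.0
```

This is why the steady-state EBITDA-MCX yardstick and the growth question can be kept separate: growth capex only deserves credit in the valuation when its return clears the hurdle.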

Greenblatt would say he liked buying AMEX at 12 times earnings because he could see pension funds paying 14 to 16 times later since the business was high quality. He foresaw a high multiple or lower cost of capital for AMEX.

I honestly don’t know how he felt comfortable doing that (perhaps experience?)