I'm curious what others think. Momentum is all behavioral, and at least some part of value is behavioral. Do most people believe that as investors get more sophisticated and markets get more efficient, behavioral anomalies will go away? Personally, I believe they might shrink a smidge but overall persist. For one, the risks and costs of correcting overpricing by shorting are far greater than those of correcting underpricing. Also, I don't think human behavior is changing substantially any time soon. Humans have a bit of a gambling instinct: they will always overpay for growth and want to invest in the next big thing.

Yes, I think they will persist. Investors have had millennia to get more sophisticated, and while the tools they use have improved, it is not clear that the underlying behavior of the individuals behind those tools is any better than in previous generations.

I'd have to feel more comfortable that I knew the exact nature of these anomalies before answering such questions in any meaningful way. I feel the same way about the equity risk premium puzzle--that also appears to be some sort of anomaly, and it has lasted a long time, but I don't feel sure about its future. Some diminishing over time seems like a reasonable guess, but how much and when is not so easy to pin down.

I'd also note there might be some sort of "agency" problem here, where managers are behaving rationally in light of their own interests, even if not in the interests of their clients (who, to be fair, might be giving their money to those managers who do this). This sort of thing could significantly complicate any predictions.

To put this together, I think the best hope for significant change in favor of investors is things like regulatory reform. I am skeptical about human nature changing.

I don't think investors will get more sophisticated. Those that are really smart and interested will, but on average most probably won't. As livesoft said, people will die off. And most of those people won't spend their time reading financial history and so will eventually repeat the same types of mistakes once enough time has passed (say, once a generation, every 20 or 30 years). Emotions like greed and the fear of missing out will never go away.

So behavioral anomalies should persist. They may just have periods of underperformance you have to sit through when people jump on the bandwagon. For example, IMO small value will probably underperform in the short to medium term, but over a long enough period of time, the weak hands will sell and those that have the intestinal fortitude will probably be rewarded for their patience.

Ultimately, I suppose it's like asking what is the future of the human brain, how will it evolve, assuming it continues to evolve or at least evolve in ways similar to the past. And then the narrower question, how might that better control our behavior with money. I don't know the future, but for now, I'm inclined to bet on livesoft's "sucker."

Behavioral anomalies do not persist. They blink in and out of existence based upon nothing. So, no.

On the Bloomberg site: “A New Paper Just Took a Huge Shot at Some of the World's Hottest Investments”

Highlights from the article:

The researchers looked at a variety of factors, including momentum, value-versus-growth and ones based on trading frictions. According to their findings, nearly two-thirds of the market variations couldn’t be replicated 95 percent of the time. Even for significant anomalies -- such as price momentum and operating accruals -- the magnitudes often are far lower than reported. In other words, “the capital markets are more efficient than previously reported,” they write.

And:

The study concludes that researchers looking at anomalies should more closely connect their empirical work to existing economic theory. This would lessen the impact of data mining errors on their findings.

I guess magic formulas don’t exist.

I think sometimes appraisers look for magic formulas to quickly explain the market but nope. Magic formulas don’t work. It’s just plain old hard work of researching and analyzing the market.

Sometimes in the market, you can spot a trend, but sure enough, the trend suddenly goes missing. Underwriters prefer proven facts and look aghast at magic formulas.

If magic formulas exist, why don’t banks just have robot appraisers? Because the market doesn’t give one rip about your ridiculous formula. The only certainty with magic formulas is that they blow up in your face.

Lastly, I am sure people will persist in trying to hawk this stuff.

Information is more valuable sold than used. - Fischer Black (1938-1995)

That paper doesn't suggest that hundreds of anomalies have existed then gone away. It suggests out of the hundreds of published anomaly findings, most never existed in the first place, and were just an artifact of data mining.

I don't know enough to take a stand on that particular paper's methods, but I would tend to agree that until a result has been thoroughly tested out of sample and replicated, one should be skeptical.

The biggest behavioral anomaly has been the belief that the average investor can outsmart the market. This investor ignores the costs of the super-smart strategy and thus ends up underperforming, or misses the fact that the strategy wasn't all that super smart after all.

Chasing after something that would have worked in the past had the strategy been investible at zero cost is one dimension of this behavior.

NiceUnparticularMan wrote:I don't know enough to take a stand on that particular paper's methods, but I would tend to agree that until a result has been thoroughly tested out of sample and replicated, one should be skeptical.

I think the well-known factors such as small, value, momentum etc. have all been replicated out of sample, no?

So in some variations, yes. Part of what that paper is pointing out is that there has been a proliferation of published anomalies that loosely fall within these broader categories. The authors claim many of them can't be replicated in general, and this hollows out some categories more than others. Then they use their own sort of factor model (what they call the q model) to "explain" many of the remaining anomalies (I suspect that part will be somewhat controversial). And then there are some still left.

So yes, generally speaking some of the broader categories of factors have survived a lot of out of sample testing in some form, but there is ongoing debate about exactly how to specify them.

Personally, I am currently sticking with small and value, and quality screens and such to the extent I can basically get them for "free" in certain funds I like for their small and value characteristics anyway. I don't really try to figure out how to do much more than that (although I only rebalance annually, which is a sort of momentum play). And I am a bit skeptical of people who market really heavy quant-driven funds and want to charge you a bunch of money for such funds, because I am concerned about this issue of whether all that has been properly tested in this sense.

2500 years ago in India, a great sage said that people were motivated by three things: greed, aversion and ignorance. He also said that there will come a time (now?) that our business, academic and political leaders are not worthy of their positions.

In the Vanguard study, annual rebalancing ended up with a higher annualized return and lower annualized volatility than monthly rebalancing. Various folks have attributed this to a momentum effect. However, it has also been pointed out that it might work much less well for multiple-asset-class portfolios, in which case the better approach to capturing momentum through less frequent rebalancing might be tolerance bands. You can see an argument like that, including a reference to the Vanguard study, here:

The thing is, tolerance bands and other approaches like it might require more frequent information, even if they require less frequent action, which could significantly increase emotional stress and behavioral risk.
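A tolerance-band check of the sort described above can be sketched in a few lines. This is only an illustration; the targets, dollar values, and 5% band width are all made up for the example:

```python
# Sketch of tolerance-band rebalancing: trade only when an asset's
# weight drifts more than `band` away from its target weight.

def drifted_assets(values, targets, band=0.05):
    """Return the assets whose current weight is more than `band`
    (in absolute terms) away from the target weight."""
    total = sum(values.values())
    return [
        asset
        for asset, target in targets.items()
        if abs(values[asset] / total - target) > band
    ]

portfolio = {"stocks": 70_000, "bonds": 30_000}
targets = {"stocks": 0.60, "bonds": 0.40}

# Stocks sit at 70% vs a 60% target, outside the 5% band, so both
# legs of the portfolio show up as needing a trade.
print(drifted_assets(portfolio, targets))  # ['stocks', 'bonds']
```

Note the behavioral cost mentioned above: running this check requires looking at current values far more often than a once-a-year calendar rebalance does, even on the many days when it says "do nothing."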

So, I like annual rebalancing as a VERY mild momentum play, because it also requires less frequent information.

Of course if you really want to minimize emotional stress and behavioral risk, you can just use an all-in-one fund with automatic rebalancing, and go do whatever you enjoy doing.

selftalk wrote:Human nature in aggregate hasn't changed since the beginning of time. Why would it change now?

A better question might be: Will human nature be less involved in pricing the market in the future?

Exactly. Great strides are being made in artificial intelligence, such as in deep learning. If a significant enough number of dollars are managed by AI systems, human-induced anomalies will be eaten in microseconds.

As our analysis has shown, the risk-adjusted returns are not meaningfully different whether a portfolio is rebalanced monthly, quarterly, or annually.

I don't disagree with them--the small differences I identified really are not meaningful. That's a problem many have identified--to the extent this is a momentum play, it is such a crude one that it ends up being very ineffective.

rkhusky wrote:Exactly. Great strides are being made in artificial intelligence, such as in deep learning. If a significant enough number of dollars are managed by AI systems, human-induced anomalies will be eaten in microseconds.

Exactly how will they be "eaten in microseconds"?

If you've got a market full of self-learning bots capable of accurately valuing companies, then any company which is undervalued due to human behavioral reasons would likely be bought by the bots until it is properly valued. This assumes the bots don't make the same behavioral errors that humans do, which I think is a fair assumption. There's no reason to think programmers will program their bots with their own behavioral errors. More likely the bots will be data-driven and self-learning, which will lead them to correct any such errors if they do occur.

The sum total return of stock investors will continue to be that of the 'Total Market'.
There will continue to be people trying to garner something extra. Those who are better at it, with advantageous positions, information, and marketing ability, will be able to exploit those who aren't... people who believe the market is efficient will find this to be some sort of anomaly.

"To achieve satisfactory investment results is easier than most people realize; to achieve superior results is harder than it looks." - Benjamin Graham

There's no reason to think programmers will program their bots with their own behavioral errors.

On the contrary, I would think that there would be every reason to think that programmers will program their bots with their own behavioral errors. I don't see how it couldn't be the case.

Random Walker wrote:Momentum is all behavioral and at least some part of value is behavioral. Do most people believe that as investors get more sophisticated and markets get more efficient, behavioral anomalies will go away?

I would not call momentum investing an anomaly. It is a conscious, planned activity. There are computers programmed to automatically make trades based on momentum. I don't see any way this is going away.

More likely the bots will be data-driven and self-learning, which will lead them to correct any such errors if they do occur.

I don't know anything about this so I'm just speculating. "They find patterns in data". My guess is that given a random set of data, they will find patterns in that too. Back in my day, "the experts" claimed that there was an error in every 100 lines of scientific code. And I've seen this too. But the codes still ran and planes still fly. Interesting.
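The pattern-finding worry above is easy to demonstrate. A sketch (all numbers illustrative): generate many purely random "strategies" of coin-flip monthly outcomes and look at the best one, which will appear skilled in-sample even though every strategy's true win rate is exactly 50%:

```python
# Sketch: pure noise contains "patterns" if you search hard enough.
# Each strategy is 60 months of coin flips; we report the best-looking
# win rate found among 200 such strategies.
import random

random.seed(1)

def best_of_random(n_strategies=200, n_months=60):
    """Return the highest in-sample win rate among purely random strategies."""
    best = 0.0
    for _ in range(n_strategies):
        wins = sum(random.random() < 0.5 for _ in range(n_months))
        best = max(best, wins / n_months)
    return best

# The best of 200 coin-flippers typically "wins" well over 60% of its
# months, despite having no skill whatsoever.
print(best_of_random())
```

Out-of-sample, of course, that same "winning" strategy reverts to a 50% win rate, which is exactly the replication problem the paper above is pointing at.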

NibbanaBanana wrote:On the contrary, I would think that there would be every reason to think that programmers will program their bots with their own behavioral errors. I don't see how it couldn't be the case.

Do you think the programmers behind Deep Blue programmed their own chess strategies into it? Were they better at chess than Kasparov? Or the programmers of AlphaGo? Better than the world champion of Go? If we can make AIs better than any human at Chess and at Go (and Jeopardy!), why could we not make AIs better than humans at investing?

Because chess and Go are complete, perfect-information games, they bear little similarity to what we can expect from AI in investing. Even poker, a complete- but imperfect-information game, where AI is just getting interesting, is so vastly different from valuing investments (which ultimately rests on predicting the future) that I don't think it tells us much about how AI will ultimately compete in the stock-picking arena.

In other words, it is relatively easy to program an AI to avoid your mistakes in Chess and Go because they are just computational limitations of your own, a little bit more difficult in poker and vastly more complex and possibly unsolvable in investing.

avalpert wrote:In other words, it is relatively easy to program an AI to avoid your mistakes in Chess and Go because they are just computational limitations of your own, a little bit more difficult in poker and vastly more complex and possibly unsolvable in investing.

First of all, Go could not be won with the same method as Chess. More computational power was not enough; it was new progress in machine learning that made the win possible. Second, the blog post talks a lot about how intuitive Go is. It's a lot about intuition and not about calculation. This intuition can be learned by an AI, to a better degree than a human. Third, remember that Watson learned to win at Jeopardy and is now being used to diagnose disease. These are not computational challenges. They are tasks that require understanding language and intelligently searching through information to find the answer to specific questions. It's all about learning. And the number of possible games of Go is around 10^761 (there are 10^80 atoms in the universe). It's insanely complex and unsolvable with raw computation, even if you had a computer the size of ten billion universes.

I see absolutely no reason why investing should be so difficult it cannot be done better with the machine learning capabilities of AlphaGo combined with the information search and understanding capabilities of Watson, or the next generations of these programs.

Oh, and the previous generation of Go-playing AIs used Monte Carlo simulations to run lots of future scenarios and look at which moves were generally more successful. AKA trying to predict the future. And AlphaGo is orders of magnitude more sophisticated.

Last edited by Ari on Fri May 12, 2017 2:06 pm, edited 1 time in total.

I have no technical understanding of deep learning or AI, but it's fascinating stuff and I’ve often wondered how it all might eventually (appears there's a long way to go in any case) apply to investing, specifically to the risk-averse or overconfident behavior of the average investor. There are many good articles on deep learning geared to the layman and here are just two, the second somewhat at odds with the first:

Purely behavioral premia should not persist. Risk premia should persist.

Investors should be rewarded for buying and holding riskier assets. Stocks will beat bonds in the long run. Small value stocks will outperform in the long run. In some sense, these risk premia are behavioral: people avoid holding high-return assets because of their risk, even if those assets will offer significantly greater returns over a long investment horizon (such as an investor accumulating for retirement). People don't like losing 50%+ of their holdings, even though there's a good chance that the portfolio will recover and then some.

I think that factors which have no risk explanation will be arbitraged away. Momentum, for example, will lose its luster once AI traders learn to accurately value equities. The entire basis for momentum is that the market underestimates the magnitude of changes in valuation: falling companies should fall further, and rising companies should rise higher. I don't see major risk here. It just seems like a market inefficiency that should be corrected.

People are still betting at race tracks and losing. Is the human behavior much different here? I just can't understand why investors and speculators in general SIMPLY cannot accept the total market return and be satisfied reaching their financial goals with that. These people devise all kinds of ways to get immediate gratification and usually end up stumbling, thus failing to reach their financial freedom. Reaching too far can take you over the cliff. Ask around and you'll see if people have been honest.

slowmoney wrote:Behavioral anomalies do not persist. They blink in and out of existence based upon nothing. So, no.

On the Bloomberg site: “A New Paper Just Took a Huge Shot at Some of the World's Hottest Investments”

I just read that paper titled "Replicating Anomalies" - the research article by Hou, Xue, and Zhang is here: http://www.nber.org/papers/w23394
Unfortunately, it's gated, but Chen Xue has a working paper version on his website (https://sites.google.com/site/xuecx2013/research, look under "working papers"). I would be a bit circumspect about their preliminary findings, though, because the paper has not yet undergone peer review. A main concern is that it splices together many data sets to try to test these hundreds of premia, which means it has inconsistent sample sizes and hundreds of variables to consider in the analyses. It is likely to undergo significant revision (and may possibly reach a more qualified conclusion) in order to be published.

The elevator pitch: with so many financial anomalies being "discovered," many are likely to be false positives due to the nature of statistical analysis, and many more disappear once they are known and published. That said, as long as trading is driven by human agents, we are unlikely to see the EMH borne out in its theoretical form.
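The false-positive half of that pitch can be illustrated directly. A sketch (sample sizes and thresholds illustrative, not taken from the paper): test hundreds of candidate "anomalies" that are in truth pure noise, and roughly 5% of them clear the conventional significance bar anyway:

```python
# Sketch of the data-mining point: at a 5% significance level, about
# 5% of noise-only "anomalies" will look significant by chance alone.
import random

random.seed(42)

def count_false_discoveries(n_anomalies=500, n_obs=120, z_crit=1.96):
    """Count noise-only 'anomalies' whose t-statistic exceeds z_crit."""
    hits = 0
    for _ in range(n_anomalies):
        # 120 "monthly returns" drawn from a distribution with true mean 0
        returns = [random.gauss(0.0, 1.0) for _ in range(n_obs)]
        mean = sum(returns) / n_obs
        var = sum((r - mean) ** 2 for r in returns) / (n_obs - 1)
        t_stat = mean / (var ** 0.5 / n_obs ** 0.5)
        hits += abs(t_stat) > z_crit
    return hits

print(count_false_discoveries(), "of 500 pure-noise anomalies look 'significant'")
```

With hundreds of published anomaly papers, this is why out-of-sample replication, rather than in-sample significance, is the test that matters.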