September 17, 2007

Misleading Recession Probability: Kedrosky Title Switch

Last week Paul Kedrosky at Infectious Greed, one of our featured blogs, wrote a short piece on recession probability with the following title:

"Probability of U.S. Recession in 2008"

He wrote about the recent Wall Street Journal survey of economists, noting that the overall estimate of a recession had increased and the range of probabilities was extreme. His key comment:

Three-quarters of the 52 economists surveyed put the odds at or above 30%, but the range was gigantic from 5% to 90%.

This obviously means that one-quarter of economists see the probability as lower than 30%, but we have no quarrel with the overall conclusion that the (planned) slowing of the economy has increased the chance of a recession, probably into the 30% range.

Paul goes on to cite the recession probability from Intrade, a prediction market. We think prediction markets are useful and agree with Paul that Intrade is the best, but care is required in using the information. The 2008 recession contract has not yet attracted much interest and trading is thin. You can see the order book here. As we write this, the inside market shows a 52 bid for 16 contracts and a 58 offer of 9 contracts. A contract settles at either zero or 100, so the price can be interpreted as the probability of the outcome on a percentage scale. The contracts settle for a full value of $10. This means that the best current bidder is risking about $83. Looking only at orders already in the book, someone with $2000 could send the price to 30 or to 90. Presumably this would attract new orders, but the point is clear. This contract would be more meaningful if there were a tighter and deeper market.
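The dollar figures above follow from Intrade's contract terms: prices are quoted on a 0–100 scale and a contract settles at $10, so each price point is worth $0.10 per contract. A minimal sketch of that arithmetic (the 52-bid-for-16-contracts numbers come from the post itself):

```python
# Intrade-style contract arithmetic: prices quoted 0-100, each
# contract settles at $10, so one price point = $0.10 per contract.
POINT_VALUE = 0.10  # dollars per price point per contract

def capital_at_risk(price, contracts):
    """Dollars the buyer loses if the contract settles at zero."""
    return price * contracts * POINT_VALUE

# The best bid described in the post: 52 for 16 contracts.
risk = capital_at_risk(52, 16)
print(f"Best bidder risks about ${risk:.0f}")  # about $83
```

The same per-point value explains why a relatively small amount of capital could sweep a thin book and move the quoted "probability" dramatically.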

We had marked Paul's original article for comment because of the disparity between the predictions of economists and non-economists, in this case the bettors at Intrade, a theme we discussed here.

Misleading Headline

Much to our surprise, this morning's email alert from Seeking Alpha showed the same article with the following incorrect headline:

"Economists: Probability of U.S. Recession in 2008 Almost 60%"

The editors at Seeking Alpha are very good. On articles they pick up from "A Dash" their headline is often different and usually better than our original. In this case, however, the headline is an unfortunate and misleading error. Paul Kedrosky is a much respected and popular writer, so the story will get a lot of attention. How many readers will notice that the description in the text does not match the headline?

I apologize for double-dipping, but I realized I needed to flesh out the answer a bit more.

If one says the recession odds are 40% this year, one means there is a 60% chance of no recession. It follows that, while no single non-recession disproves the model, a streak of 7 consecutive independent years in which the forecaster says "40% odds" and no recession occurs gives us 97.2% confidence that the model is incorrect.
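The 97.2% figure comes from compounding the model's own "no recession" probability over the streak, assuming the years are independent:

```python
# If the model says 40% recession odds each year, it assigns a 60%
# chance to "no recession". Under independence, seeing zero
# recessions in 7 straight years has probability 0.6**7.
p_no_recession = 0.60
streak = 7

p_streak = p_no_recession ** streak  # probability the model survives
confidence = 1 - p_streak            # confidence the model is wrong
print(f"{confidence:.1%}")           # 97.2%
```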

If one says that "conditions are not favorable for stocks" and stocks go up, that one occurrence doesn't disprove the model. However, if one says it every quarter for four consecutive years, then one would expect, if the model were correct, that the distribution of good and bad quarters in the last 16 quarters would be statistically significantly worse than expected given the last few decades of quarterly stock market experience. If, indeed, the last 16 quarters have been significantly better than average, one has some statistical evidence that the model in question is flawed.

If one manages money according to a statistical model, one is making predictions.

"The tough question is after how many games of being wrong do you reevaluate your model."

It's not that tough a question. If the prediction is recession, what are the odds of a recession in a given year? 20%? OK, then the benchmark for accuracy is 80%, since any fool can say "no recession" every year. Do a binomial approximation test to see if the accuracy level reached by the predictor is statistically significantly different from 80%. Voila!
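The "binomial approximation test" mentioned above can be sketched with a normal approximation to the binomial: compare the forecaster's hit rate against the naive benchmark and compute a z-score. The 30-of-40 record below is a hypothetical illustration, not a figure from the post:

```python
import math

def binomial_z_test(correct, total, benchmark):
    """Normal approximation to the binomial test: z-score for whether
    the observed accuracy differs from the benchmark accuracy."""
    p_hat = correct / total
    se = math.sqrt(benchmark * (1 - benchmark) / total)
    return (p_hat - benchmark) / se

# Hypothetical record: right in 30 of 40 yearly calls, measured
# against the 80% "always predict no recession" benchmark.
z = binomial_z_test(30, 40, 0.80)
print(f"z = {z:.2f}")  # |z| < 1.96, so not significant at the 5% level
```

A forecaster would need a hit rate well above (or below) 80% over that sample before the difference from the lazy benchmark cleared conventional significance.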

In the case of a fund manager who has outperformed his index over an 8-year cumulative period, but outperforms in only 4 specific years (falling short in the other 4), one should rightfully ask, "What's the pattern?" If all four years of out- (or under-) performance fall in streaks, streaks that correspond to changing market conditions, is it fair to say that the manager's predictive model is flawed? That distribution is approaching significance, but not quite there yet ... unless one evaluates outperformance separately when the benchmark return is positive or negative, and then the pattern is significant.

Putting money on something implies a prediction, and further, some amount of confidence in that prediction.

Thank you for removing the 2 paragraphs, and thanks for the link to your previous post on "forecasting unlikely events". I think you hit the nail on the head with this:

"Let us suppose that an expert put the chances of the big loss at 40%, but the home team actually won the game! Was the expert wrong? Not necessarily. We cannot tell from a single game. The odds might well have been 40%. It would take many games of similar circumstances for us to judge the accuracy of the prediction.

Briefly put, our experts would never predict a three-run loss in a specific game, although their probability estimates would reflect the specific circumstances."

I basically made this exact same point on another blog with respect to making predictions. The tough question is after how many games of being wrong do you reevaluate your model.

I agree that you can't throw the baby out with the bath water, and missing just the 2001 recession isn't proof experts are idiots, but I would love to see a comprehensive study of numerous forecasts.

I'm going off memory here, but I think it was Dreman (a notable and highly successful value investor) who completed a study of sell-side analyst estimates and concluded they were dismal, and not much better than random predictions.

"We had marked Paul's original article for comment because of the disparity between the predictions of economists and non-economists, in this case the bettors at Intrade, a theme we described discussed here."

I'm not sure it is even worthwhile to spend a lot of time on whether we are entering a recession soon, regardless of whether it is journalists or "experts" making the proclamations.

I recently read a note on recessions indicating that Bernanke and 90% of economists missed forecasting the 2001 recession. They all forecast slowing growth and no recession. So much for the experts.

I completely agree that it is dangerous to attribute too much credibility to journalists and bloggers with questionable knowledge or experience. However, I think it could be just as dangerous to attribute credibility to "experts" whose track records may not be much better. My own view is that it almost always makes more sense to evaluate the argument and analysis on its own merits, regardless of who is making it.