Thursday, June 28, 2007

In the same issue of the Economist magazine I cited previously, there is an article about the valuation of currencies based on 13 quantitative models that Morgan Stanley developed. They found that the most overvalued currency (against the US dollar) is the New Zealand dollar, while the most undervalued currency is the Japanese Yen.

What about the Chinese Yuan, which arouses so much hoopla in Congress? The models found it to be almost exactly fairly valued.

Wednesday, June 27, 2007

There is an article about algorithmic trading in the latest issue of the Economist magazine, where it says that one-third of all stock trades in the US are due to algorithmic trading. This should not surprise us. What is more interesting is its mention of the electronically tagged news products that are coming out of Dow Jones and Reuters, which purportedly enable computers to buy or sell stocks immediately upon the release of a news item. The data suppliers regard these news products as some kind of secret high-tech weapons: "Dow Jones claims the business is so secretive that it cannot divulge details of customers." Is this hype justified?

Actually, to get a taste of news-driven trading, you don't need to pay a hefty fee to buy one of these products. You can just monitor the regularly scheduled economic news release (consumer confidence, new homes sales, crude inventories, etc.), trade the relevant futures, and proceed to make millions.

The fact that most of us who monitor these economic news releases haven't yet made our millions is a good indication of whether these news products will help you do the same. The information contained in the news is often difficult to interpret. Even the initial price reaction to the news may be wrong, leading to a swift reversal of the apparent initial trend. And finally, what's wrong with scanning for sudden price movements, and then checking for news to confirm that the movement is due to the release of new information?
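The scanning approach can be sketched in a few lines: flag any bar whose return is an outlier relative to recent volatility, then go look for the news item that might explain it. This is my own illustration, and the lookback and threshold parameters here are arbitrary assumptions.

```python
import numpy as np

def spike_scan(prices, lookback=20, threshold=3.0):
    """Flag bars whose 1-period log return deviates from the trailing mean
    by more than `threshold` standard deviations of the trailing `lookback`
    returns -- candidates for a manual news check."""
    p = np.asarray(prices, float)
    rets = np.diff(np.log(p))
    flags = []
    for t in range(lookback, len(rets)):
        window = rets[t - lookback:t]
        if abs(rets[t] - window.mean()) > threshold * window.std():
            flags.append(t + 1)   # index into `prices`
    return flags
```

For example, a flat price series with a single jump to 110 gets flagged both on the jump and on the snap-back bar.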

Friday, June 22, 2007

I have discussed in various articles trading the spreads between pairs of ETFs, or between a basket of stocks and an ETF, using the cointegration technique. There is, however, a glaring omission: I haven't yet mentioned the classic statistical arbitrage strategy of pair-trading stocks.

There are pros and cons to applying cointegration to pair-trading stocks. On the pro side: because of the large number of stocks available, we can construct a highly diversified portfolio, which improves the validity of our results. Even if a number of spreads fail to cointegrate going forward, we can count on a larger number that still do. (For example, my USO-XLE spread fell apart, while the GLD-GDX spread is still tightly cointegrated.) There are two main cons: 1) Stocks are subject to various specific risks which may render a purely statistical model useless, especially in M&A situations. It is therefore customary to remove stocks from the portfolio when they become involved in special situations, but by the time the news is public we may already have incurred a substantial loss. 2) Because of the technique's long history, it has become known to many hedge funds and indeed to students of finance, and therefore pair-trading stocks has not been very profitable, especially in the period 2003-2005. Here I plotted the excess returns of the strategy as applied to US bank stocks from 20010102 to 20041231. (Excess returns means credit interest on the margin balance is not included.)
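To make the mechanics concrete, here is a minimal pair-trading sketch, not my production model: estimate the hedge ratio by a rolling OLS regression, then enter when the z-score of the spread is stretched and exit when it reverts. The lookback and entry/exit thresholds are illustrative assumptions.

```python
import numpy as np

def pair_trade_signals(y, x, lookback=60, entry_z=2.0, exit_z=0.5):
    """Toy pair-trading sketch. Position +1 means long the spread
    (long y, short beta*x); -1 means short the spread; 0 means flat."""
    y, x = np.asarray(y, float), np.asarray(x, float)
    n = len(y)
    pos = np.zeros(n)
    for t in range(lookback, n):
        ys, xs = y[t - lookback:t], x[t - lookback:t]
        beta = np.polyfit(xs, ys, 1)[0]            # rolling OLS hedge ratio
        spread = ys - beta * xs
        z = (y[t] - beta * x[t] - spread.mean()) / spread.std()
        if pos[t - 1] == 0:
            # enter against the deviation when the spread is stretched
            pos[t] = -np.sign(z) if abs(z) > entry_z else 0.0
        else:
            # exit when the spread has reverted close to its mean
            pos[t] = 0.0 if abs(z) < exit_z else pos[t - 1]
    return pos
```

A full backtest would also need transaction costs and the special-situation filter discussed above; this only generates the raw entry/exit signals.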

Interestingly, when a strategy becomes too popular and less profitable, many traders start to abandon it, or at least reduce the trading capital invested in it. After a while, its popularity decreases, and the profitability recovers! This life cycle of strategies reveals itself as a mean reversion of strategies, on top of the mean reversion of stock prices. In our case, the recovery of this strategy started in 2005 and is still in full force. Here I plotted the excess returns of the strategy as applied to US bank stocks from 20050103 to 20070531:

The average annual excess return from 2005 to now is about 7.7% (on one side of the capital), and the Sharpe ratio is 0.8. Since I have applied the technique to only one industry group, diversification is limited and therefore the Sharpe ratio is low. Interested readers can attempt to apply the technique to more industry groups and perhaps generate a higher Sharpe ratio. Even with just one industry group, this strategy may be a good complement to a portfolio heavy on trend-following strategies, which requires a reversal model to smooth out the returns.
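For reference, the annualized Sharpe ratio quoted above can be computed from a series of daily excess returns like this (assuming 252 trading days a year, and no benchmark subtraction since the returns are already excess returns):

```python
import numpy as np

def annualized_sharpe(daily_excess_returns, periods=252):
    """Annualized Sharpe ratio of a daily excess-return series."""
    r = np.asarray(daily_excess_returns, float)
    return np.sqrt(periods) * r.mean() / r.std()
```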

I have started a model portfolio in my subscription area to demonstrate this strategy; it will be updated daily around 3pm ET. Other details of the strategy are described in an accompanying article there as well.

Tuesday, June 12, 2007

Some of you may remember that I preached about the uselessness of factor models in predicting short-term returns, and the unreliability of many exotic factors even for the long term. In particular, factor models are especially inaccurate in valuing growth stocks (i.e. stocks with low book-to-market ratio), as evidenced by such models' poor performance during the internet bubble. This is not surprising because most commonly used factors rely on historical sales or earnings measures to judge companies, while many growth stocks have very short histories and little or no earnings to report. However, as pointed out recently by Barry Rehfeld in the New York Times, Professor Mohanram of Columbia University has devised a simple factor model that relies on 8 very convincing factors to score growth stocks. These factors are:

Normalized return on assets.

Normalized return on assets based on cash flow.

Cash flow minus net income. (i.e. negative of accrual.)

Normalized earnings variability.

Normalized sales growth variability.

Normalized R&D expenses.

Normalized capital spending.

Normalized advertising expenses.

By "normalized", I mean we need to standardize the numbers with respect to the industry median. To Prof. Mohanram's credit, he claims only that these factors will generate returns after 1 or 2 years, not the short-term returns that many traders expect factor models to deliver. The excess annual return based on buying the group of stocks with the highest score and shorting the group with the lowest score is a good 21.4%. Not only does the combined score generate good returns, but each individual factor also delivers good correlation with future returns, proving that the performance is not due to some questionable alchemy of mixing the factors. For example, it makes good intuitive sense that extra spending on R&D and advertising will boost future earnings for growth stocks.
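A minimal sketch of this kind of industry-median scoring (my own illustration of the idea, not Prof. Mohanram's exact procedure): award one point for each factor on the "good" side of the industry median, flipping the comparison for the variability factors, where lower is better.

```python
import numpy as np

def growth_score(firm_factors, industry_factors, higher_is_better):
    """Score a firm: +1 for each factor on the 'good' side of the
    industry median.
    firm_factors:     dict factor name -> firm's value
    industry_factors: dict factor name -> list of peer values
    higher_is_better: dict factor name -> bool (False for the
                      variability factors, where lower is better)"""
    score = 0
    for name, value in firm_factors.items():
        median = float(np.median(industry_factors[name]))
        good = value > median if higher_is_better[name] else value < median
        score += int(good)
    return score
```

With the 8 factors listed above, the score ranges from 0 to 8; the strategy as described buys the highest-score group and shorts the lowest-score group.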

Interestingly, Prof. Mohanram pointed out that most of the out-performance of the high-score stocks occurs around earnings announcements. Hence investors who don't like holding a long-short portfolio for a full year can simply trade during earnings season.

One caveat of this research is that it was based on 1979-99 data (at least in the preprint version that I read). As many traders have found out, strategies that worked spectacularly in the 90's don't necessarily work in the last few years. At the very least, the returns are usually greatly diminished. In the future, I hope to perform my own research to see whether this strategy still holds up with the latest data.