Thursday, December 24, 2009

One of the most important factors in statistical arbitrage pairs trading is the selection of the paired instruments. We can use basic heuristics to guide us, such as grouping stocks by industry in the anticipation that stocks with similar fundamental characteristics will share factor risk and tend to exhibit co-movement. But this still leaves us with potentially thousands of combinations. There are some statistical techniques we can use to quantify the tradeability of a pair: one approach is to calculate the correlation coefficient of each pair's return series. Another is to consider cointegration measures on the ratio of the prices, to see if it remains stationary over time.

In this article I briefly summarise the alternative approaches and apply them to a universe of stock pairs in the oil and gas industry. To measure how effective each measure is in real-world trading, I back-test the pairs using a simple mean-reversion system, then regress the generated win rate against the statistical results. Some basic insights emerge as to the effectiveness of correlation and cointegration as tools for selecting candidate pairs.
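As a rough illustration of the two measures, here is a minimal Python sketch on synthetic prices. The Dickey-Fuller t-statistic below is a simplified stand-in for a proper cointegration test (in practice one would use something like statsmodels' adfuller, which supplies the correct critical values), and all data are simulated:

```python
import numpy as np

def pair_stats(p1, p2):
    """Correlation of daily log returns, plus a simplified Dickey-Fuller
    t-statistic on the price ratio (strongly negative => mean-reverting)."""
    r1 = np.diff(np.log(p1))
    r2 = np.diff(np.log(p2))
    corr = np.corrcoef(r1, r2)[0, 1]

    ratio = p1 / p2
    y = np.diff(ratio)                      # change in the ratio
    x = ratio[:-1] - ratio[:-1].mean()      # demeaned lagged level
    y = y - y.mean()
    beta = (x @ y) / (x @ x)                # OLS slope of change on level
    resid = y - beta * x
    se = np.sqrt((resid @ resid) / (len(x) - 2) / (x @ x))
    return corr, beta / se

# toy pair: two stocks driven by a common random walk, so the ratio is stationary
rng = np.random.default_rng(0)
common = np.cumsum(rng.normal(0.0, 0.01, 1000))
p1 = 50 * np.exp(common + rng.normal(0.0, 0.005, 1000))
p2 = 40 * np.exp(common + rng.normal(0.0, 0.005, 1000))
corr, t_stat = pair_stats(p1, p2)   # high correlation, very negative t-stat
```

A screen over thousands of candidate pairs would simply rank them by these two numbers before committing to any backtest.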

Please visit http://www.paulfarrington.com/research/Selecting%20tradeable%20pairs.htm for details of my methodology and results.

Friday, December 18, 2009

1. Conference on 'Computational Topics in Finance', February 19/20, 2010, National University of Singapore. The topics will include using R/Rmetrics in finance, but the conference is by no means confined to R. See http://www.rmetrics.org/.

2. Consulting position (6-month renewable contract) available at a major Canadian bank in Toronto: research in various mathematical algorithms used for pricing of interest rate derivative instruments like swaps, caps, swaptions, FRAs. Please contact their recruiter at http://www.linkedin.com/pub/kevin-p-w-wang/6/899/29a.

Sunday, December 06, 2009

It is worrisome when not one but two eminent economists denounce financial speculation as "harmful human activities" within the short space of two weeks. (See Paul Krugman's column here and Robert Frank's here.) It is more worrisome when their proposed cure to this evil is a tax on all financial transactions.

Granted, you can always find this or that situation when financial speculation did cause harm. Maybe speculation did cause the housing bubble. Maybe speculation did cause an energy price bubble. In the same vein, you can also argue that driving is a harmful human activity because cars did cause a few horrific traffic accidents.

No, we can't focus on a few catastrophes if we are to argue that financial speculation is harmful. We have to focus on whether it is harmful on average. And on this point, I haven't seen our eminent economists present any scientific evidence. On the other hand, as an ex-physicist and an Einstein devotee, I can imagine some thought experiments (or Gedankenexperimente, as Einstein would call them) that illustrate how the absence of financial speculation can clearly be detrimental to the interests of the much-beloved long-term investors. To make a point, a Gedankenexperiment is usually constructed so that the conditions are extreme and unrealistic. So here I will assume that the financial transaction tax is so onerous that no hedge funds or other short-term traders exist anymore.

Gedankenexperiment A: Ms. Smith just received a bonus from her job and would like to buy one of her favorite stocks in her retirement account. Unfortunately, on the day she placed her order, a major mutual fund was rebalancing its portfolio and had also decided to shift assets into that stock. In the absence of hedge funds and other speculators selling or even shorting this stock, the price of that stock went up 40% from the day before. Not knowing that the cause of this spike was a temporary liquidity squeeze, and afraid that she would have to pay even more in the future, Ms. Smith paid the ask price and bought the stock that day. A week later, the stock price fell 45% from the peak after the mutual fund buying subsided. Ms. Smith was mortified.

Gedankenexperiment B: Mr. Smith decided that the stock market is much too volatile (due to the lack of speculators!) and opted to invest his savings into mutual funds instead. He took a look at his favorite mutual fund's performance, and unfortunately, its recent performance seemed to be quite a few notches below its historical average. The fund manager explained on her website that since her fund derived its superior performance from rapidly liquidating holdings in companies that announced poor earnings, the absence of liquidity in the stock market often forced her to sell into an abyss. Disgusted, Mr. Smith opted to keep his savings in his savings account.

Of course, our economists will say that the tax is not so onerous that it will deprive the market of all speculators (only the bad ones!?). But has anyone studied how many units of liquidity will be drained from the marketplace if we impose 1 unit of tax, and in turn, how many additional units of transaction costs (which include implicit costs due to the increased volatility of securities) would be borne by an average investor, who may not have the luxury of submitting a limit order and waiting for it to be filled?

Friday, November 27, 2009

When I was growing up in the trading world, a high Sharpe ratio was the holy grail. People kept forgetting the possibility of "black swan" events, only recently popularized by Nassim Taleb, which can wipe out years of steady gains in one disastrous stroke. (For a fascinating interview of Taleb by the famous Malcolm Gladwell, see this old New Yorker article. It includes a contrast with Victor Niederhoffer's trading style, plus a rare close-up view of the painful daily operations of Taleb's hedge fund.)

Now, however, the pendulum seems to have swung a little too far in the other direction. Whenever I mention a high Sharpe-ratio strategy to some experienced investor, I am often confronted with dark musings of "picking up nickels in front of steamrollers", as if all high Sharpe-ratio strategies consist of shorting out-of-the-money call options.

But many high Sharpe-ratio strategies are not akin to shorting out-of-the-money calls. My favorite example is that of short-term mean-reverting strategies. These strategies not only provide consistent small gains under normal market conditions, but, in contrast to shorting calls, they make out-size gains precisely when disaster strikes. Indeed, they give us the best of both worlds. (Proof? Just backtest any short-term mean-reverting strategy over 2008 data.) How can that be?

There are multiple reasons why short-term mean-reverting strategies have such delightful properties:

1. Typically, we enter into positions only after the disaster has struck, not before.

2. If you believe a certain market is mean-reverting, and your strategy buys low and sells high, then of course you will make much more money when the market is abnormally depressed.

3. Even on the rare occasion when the market does not mean-revert after a disaster, it is unlikely to go down much further during the short time period when we are holding the position.

"Short-term" is indeed the key to the success of these strategies. In contrast to the LTCM debacle, where they would keep piling on to a losing position day after day hoping it would mean-revert some day, short-term traders liquidate their positions at the end of a fixed time period, whether they win or lose. This greatly limits the possibility of ruin and leaves our equity intact to fight another day in the statistical game.
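The points above can be made concrete with a minimal backtest sketch. The entry and exit rules and the simulated mean-reverting price series here are hypothetical, purely to show why a fixed, short holding period limits damage while still capturing the reversion:

```python
import numpy as np

def short_term_mr_backtest(prices, lookback=1, hold=1):
    """Toy rules: go long after a down move over `lookback` days, then
    exit after exactly `hold` days, win or lose (no piling on)."""
    pnl = []
    t = lookback
    while t + hold < len(prices):
        if prices[t] < prices[t - lookback]:           # recent drop: enter long
            pnl.append(prices[t + hold] / prices[t] - 1.0)
            t += hold                                  # forced exit; no averaging down
        else:
            t += 1
    return np.array(pnl)

# simulated mean-reverting price series (discrete OU-like process around 100)
rng = np.random.default_rng(1)
p = [100.0]
for _ in range(3000):
    p.append(p[-1] + 0.2 * (100.0 - p[-1]) + rng.normal(0.0, 1.0))
pnl = short_term_mr_backtest(np.array(p))
```

On a genuinely mean-reverting series the average trade is positive, and because every position is flattened after one day, no single adverse run can compound into ruin.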

So, call me old-fashioned, but I still love high Sharpe-ratio strategies.

Wednesday, November 04, 2009

I learned some years ago that ETF's are strange and wonderful creatures. Simple, long-only mean-reverting strategies that work very well on ETF's won't work on their component stocks. (Check out a nice collection of these strategies in Larry Connors' book "High Probability ETF Trading". He has also packaged these strategies into a single indicator, the ETF Power Ratings, on tradingmarkets.com.) Simple pair-trading strategies, like the one I discussed in my book, also work much more poorly on stocks than on ETF's. Why is that?

Well, one obvious reason is that, as Larry mentioned in his book, ETF's are not likely to go bankrupt (with the notable exception of the triple-leveraged ETF's, as I explained previously), because a whole sector or country is not likely to go bankrupt. So you can pretty much count on mean-reversion if you are on the long side.

Another obvious reason is that though there is news that affects the valuation of a whole sector or country, it isn't as frequent or as devastating as news affecting individual stocks. And believe me, news is the biggest enemy of mean-reversion.

But finally, I believe that the capital weightings of the component stocks also play a part in promoting mean-reversion. Typically, weighting of a component stock increases with its market capitalization, though not necessarily linearly. Perhaps large-cap stocks are more prone to mean-reversion than small-cap stocks? But more intriguingly, can we not construct a basket of stocks, with custom-designed weightings, with the objective of optimizing its short-term mean-reversion property? I (and others before me) have done something similar in constructing a basket of stocks that cointegrate best with an index. Can we not construct a basket that is simply stationary (with perhaps a constant drift)?
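As a sketch of the basket-construction idea, ordinary least squares can pick weights so that a basket tracks an index, leaving a roughly stationary spread. This is a simplified stand-in for the full treatment (which would use the Johansen procedure, or at least an ADF test on the residual), and all the data below are synthetic:

```python
import numpy as np

def basket_weights(stock_prices, index_prices):
    """Least-squares weights making the weighted basket track the index."""
    w, *_ = np.linalg.lstsq(stock_prices, index_prices, rcond=None)
    return w

# synthetic data: index is a random walk, stocks are noisy loadings on it
rng = np.random.default_rng(2)
index = 100.0 + np.cumsum(rng.normal(0.0, 1.0, 500))
loadings = np.array([0.5, 1.0, 1.5])
stocks = index[:, None] * loadings + rng.normal(0.0, 2.0, (500, 3))

w = basket_weights(stocks, index)
spread = index - stocks @ w    # wanders far less than the index itself
```

The same machinery, with the index replaced by a constant (or a deterministic drift), is one way to pose the question of a basket that is simply stationary.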

Now, perhaps you will agree with me that ETF's are strange and wonderful creatures.

Sunday, October 11, 2009

Let me talk about a topic that is far more mundane than the usual high-brow theoretical discussions of strategies and algorithms, but that has no less long-term impact on the bottom line: what is the best office environment for research and execution of quantitative trading strategies?

I have worked in different office environments before, so I feel qualified to offer an informed opinion.

At Morgan Stanley, I huddled over a desk that was semi-partitioned from the rest of the office: nobody could see or bother me unless I or they stood up. At Credit Suisse, I shared an office with 2 other prop trading colleagues, one of whom was prone to freely sharing his opinions on various current affairs with his officemates. (On the other hand, he complained that my biting an apple at lunch was too loud for him.) At Maple, a hedge fund in New Jersey, I shared an office with about 100 other colleagues on the trading floor, many of whom were prone to the same opinion-sharing temptation.

Here at my own firm, I sit in solitude (except for my cat) in my basement office, my beloved classical FM streaming over the internet, my desktop electronically connected to my partner in our Chicago office, our trading servers at Amazon and elsewhere, and other clients and partners around the world, but unmolested throughout the day unless I voluntarily pick up the phone or answer an email or instant message.

Can you guess which environment is the one I find the most productive? Which one has the least stress? And which one contributes most to the bottom line of my employers/partners/clients?

(Hint 1: read Timothy Ferriss' book The 4-Hour Workweek. This guy checks his email only once a week.)

Sunday, September 20, 2009

I confess I didn't know much about flash orders, not being one of the Big Boys on the Street, until I read that the SEC is banning them. (For a clear diagrammatic explanation of flash orders, see here. For a refutation of some of the myths and misunderstandings surrounding flash orders, see here.)

It seems to me that flash orders can be understood as "requests for liquidity" issued to various potential market makers/liquidity providers, not unlike the usual "requests for quotes" (RFQ) common in other industries. They are issued when there is not enough liquidity on a specific exchange to satisfy an investor's need, and they ultimately benefit investors by lowering their transaction costs. The fact that high-frequency traders are able to make lots of money by providing this liquidity is beside the point. Liquidity providers are supposed to make money by providing liquidity!

Some people, including Senator Charles Schumer and this New York Times op-ed, believe that flash orders are akin to front-running, a clearly illegal trading activity. But they are wrong. Front-running means that if you know someone is going to buy a stock, you step in front of them and buy it cheaply first, hoping to sell it to this slower buyer at a higher price. In the case of flash orders, the high-frequency traders are instead selling this stock to the original investor, often at a lower price than available elsewhere and thus benefiting this investor, hoping that the price will come down in the future after this liquidity need subsides. This is manifestly not illegal. This is what a market is built for!

Another way to see that flash orders are not front-running at all is that anybody, including you and me, is free to put in limit orders at the same price as those of the high-frequency traders, way ahead of time, on a specific exchange, and become a liquidity provider too. You don't have to wait for a "request for liquidity" before doing so. And presumably you will reap the same benefits as the high-frequency traders. You are not taking any additional risks over the HF traders either, since if no requests for liquidity ultimately arrive, you are none the worse for wear. You cannot begrudge the profits of the HF traders just because you didn't put the limit orders in place beforehand!

Maybe there are some other angles which I miss which can convince me that flash orders are evil. But until my kind readers convince me otherwise in the comments section, I will regard this piece of legislation as another SEC attempt at demagoguery.

Friday, September 11, 2009

It occurs to me that the only way in which a trader can become more than a completely selfish, self-enriching, narcissistic person is to trade well enough to manage other people's money, and thus save these investors from crooks and charlatans (provided you are convinced you are not a crook or charlatan yourself).

Other traders have advanced other arguments in favor of trading. But I am not convinced by them.

They say that we provide liquidity to other long-term investors who may need to liquidate their investments. But this applies only to mean-reversion strategies. Momentum strategies take liquidity away from the market, and in some cases exacerbate price bubbles. Certainly not something your grandma would approve of.

Others argue that momentum strategies help disseminate information about companies through quick price movements. But can't we just watch Bloomberg or CNBC? Do we really need some devious insiders to convey that information to the rest of us through price movements?

No, I think that independent trading should serve only one purpose (besides short-term self-sustenance): as training and preparation to become a fund manager. Once you graduate from independent trading, you enter into the grand contest among all fund managers to see who can best serve and protect investors' assets (and be rewarded according to your standing in this contest).

I know, this is an idealistic way to look at things. Serving and protecting seem to be what policemen should be doing, not traders. But as in quantitative trading, I think it helps one become more successful in one's activities to have a simple guiding principle or model. And it doesn't hurt that in this case the principle is also conscience-nourishing!

Wednesday, September 02, 2009

Author Malcolm Gladwell, in his fascinating bestseller "Outliers: The Story of Success", cites neurological research showing that "10,000 hours of practice is required to achieve the level of mastery associated with being a world-class expert." This seems to apply across many different types of experts, whether they are "writers, ice skaters, concert pianists, chess players ... Even Mozart ... couldn't hit his stride until he had his ten thousand hours in".

Reflecting on my own experience, I became consistently profitable only after 4 years of actual trading. (Research alone doesn't count -- real money needs to be at risk.) So while the number of hours may not be exactly 10,000, the order of magnitude is about right.

So if your trading has not been profitable, ask yourself this: "Have I traded 10,000 hours yet?"

Friday, August 21, 2009

Paul Teetor, who guest-blogged here about seasonal spreads, recently wrote an article about how to test for cointegration using R. Readers who don't want to pay for a copy of Matlab should find this free alternative with similar syntax quite interesting.

Friday, August 14, 2009

I have given a 2-part interview (here and here) on the various nuances of backtesting on tradingmarkets.com. Most of the ideas have been covered in my book, but it does serve as a summary of what I consider to be the most important issues.

For those of you who are interested, I may be giving a workshop on general techniques in backtesting in London as well, in addition to my pairs trading workshop. Additional details will be available on epchan.com at a later date.

Monday, July 27, 2009

Triple leveraged ETFs marketed by Direxion have been all the rage lately. The fund management company says that they do not recommend buying and holding these ETFs. But is there any mathematical justification for this caution?

Before I answer this, it is interesting to note that these ETFs (e.g. BGU is 3x Russell 1000, TNA is 3x Russell 2000) are managed as constant rebalanced portfolios, a concept I discussed before. In other words, the fund manager has to sell stocks (or futures) when there is a loss, and buy stocks (or futures) when there is a gain in the market value of the portfolio, in order to maintain a constant leverage ratio of 3. This is also identical to what the Kelly formula would prescribe, a methodology discussed extensively in my book, if the optimal leverage f were indeed 3.

However, the optimal f for such market indices is quite a bit lower than 3. Both the Russell 1000 and 2000 have f at about 1.8. This means that since the funds are leveraged at 3, there is a real possibility that sustained losses could ruin the funds (i.e. the NAV going to zero unless new capital is injected, which, er..., reminds me of a Ponzi scheme). So I would argue that not only should an investor not hold these funds for the long term, the funds themselves should not be leveraged at this level. Otherwise, it is a disaster waiting to happen.
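The Kelly argument can be sketched numerically with the standard growth-rate approximation g(f) = f·μ − f²σ²/2, which peaks at f* = μ/σ² and hits zero at 2f*. The drift and volatility below are hypothetical, chosen only so that f* = 1.8 as in the discussion above:

```python
def growth_rate(f, mu, sigma):
    """Expected compound growth rate of a constant-rebalanced portfolio
    at leverage f (Kelly approximation: g = f*mu - 0.5*f^2*sigma^2)."""
    return f * mu - 0.5 * f**2 * sigma**2

mu, sigma = 0.1125, 0.25       # hypothetical annual drift and volatility
f_opt = mu / sigma**2          # Kelly-optimal leverage: 1.8

g_opt = growth_rate(f_opt, mu, sigma)       # growth at the optimum
g_3x = growth_rate(3.0, mu, sigma)          # growth at triple leverage
g_ruin = growth_rate(2 * f_opt, mu, sigma)  # zero growth at 2*f_opt = 3.6
```

At f = 3 the expected compound growth is already well below the optimum, and any leverage beyond 2f* = 3.6 makes it negative: the volatility drag from daily rebalancing overwhelms the drift, which is the mathematical content of the warning above.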

Friday, July 17, 2009

For readers who do not want to pay for a commercial Matlab2IB API, Max Dama has put together a free alternative. Domenic has provided some additional sample Matlab codes for trading.

A user of the commercial product that I previously mentioned reports that "My problem with the matlab2ib product was that it did not have a function for all the ActiveX methods (for example, the Market Scanners, Real Time Bars and Fundamental Data methods are missing). I also had issues when I tried to stream in trades data (I'm not sure if the matlab2ib product even allows you to do this)." Apparently Max's API includes these methods, though I have not personally tried them.

Monday, June 29, 2009

You can find an interview of me in the July 2009 issue of Technical Analysis of Stocks & Commodities magazine. I mentioned in that interview and also in my book that I believe stop loss should only be applied to momentum strategies but not to mean-reverting strategies. I explained my reasoning better in my book than in the interview, and so I will paraphrase the explanation here.

In algorithmic trading, it is reasonable and intuitive that we should always make use of the latest information in determining whether we should enter into a position, whether that information is price, news, or some analysis. Let's call this the Principle of Latest Information. (If someone can think of a better or sexier name, let me know!)

So let's say we have a stock model based on price momentum, and we entered into a long position based on a recent positive return on price. A few minutes later, the price went down instead of up, causing a big loss on our position. If we now ran this momentum model again, very likely it would tell us to short the stock instead, because of the recent negative return on price. If we did that, we would be exiting the previously long position and become flat. This is in effect a stop loss, and it follows strictly from adhering to our model and our Principle of Latest Information.

In contrast, suppose we now have a stock model based on mean-reversion, and we entered into a long position based on a recent drop in price. A few minutes later, the price went down further instead of up, again causing a big loss on our position. If we now ran this mean-reversion model again, it would definitely tell us to buy the stock again because of the ever cheaper price. The model would not ask you to exit this position and take a loss. Hence, adhering to the model and the Principle of Latest Information will not lead to a stop loss for a mean-reverting model.

(Now, if we hold this losing long position long enough, the model will incorporate new historical prices into determining its long or short signals as it retrains itself, as the Principle of Latest Information says it should! At that time, it may indeed recommend that we exit the previously held long position at a loss. But this adjustment takes place on a much longer time scale, and therefore cannot really be considered a stop loss in the usual sense.)
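The asymmetry between the two cases can be seen in a toy pair of signal functions. Rerunning each model on the latest prices after a further drop, the momentum model flips to short (an implicit stop loss), while the mean-reversion model just reiterates its buy. The one-line rules here are hypothetical, purely for illustration:

```python
def momentum_signal(prices, lookback=5):
    """+1 (long) if the recent return is positive, else -1 (short)."""
    return 1 if prices[-1] > prices[-1 - lookback] else -1

def mean_reversion_signal(prices, lookback=5):
    """The opposite rule: buy weakness, sell strength."""
    return -momentum_signal(prices, lookback)

# the price keeps falling after we went long at 100
prices = [100, 99, 98, 97, 96, 95]
mom = momentum_signal(prices)        # -1: the momentum model now says short,
                                     # so the long position is stopped out
mr = mean_reversion_signal(prices)   # +1: the mean-reversion model says buy
                                     # again -- no stop loss follows
```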

More generally, I find that at every turn, and not only in the realm of stock trading, applying the Principle of Latest Information helps me to be disciplined and not be afraid to enter into new positions, take a loss, or endure a drawdown, as the case may be.

Thursday, June 25, 2009

Alphacet told me that they have a job opening for a quant who will be helping their clients backtest trading strategies, among other responsibilities. Given that Alphacet's clients include several major investment banks and hedge funds, this position should provide a pretty good close-up view of how the major quantitative players operate.

Monday, June 15, 2009

Larry Connors and Cesar Alvarez (the guys behind tradingmarkets.com) recently published Short Term Trading Strategies That Work, a nice collection of simple technical trading strategies that you can easily backtest and verify.

As I have argued in my own book, simple strategies are often the ones that work best. As with any published strategies, you may find that their backtest performance may not be as high as advertised if you test them on a different time period or a different security, or with different transaction cost assumptions; but the main value of these strategies is that they serve as an inspiration to trigger your own imagination and motivate you to refine them further.

(For example, though the book mainly covers long-only strategies, you can easily imagine the accompanying short strategies.)

To be quite honest, this is one of the few books on trading strategies that I actually manage to finish reading from cover to cover.

As I mentioned before, I now find MATLAB to be a good platform not just for backtesting, but for automated execution as well. Of course, not all brokerages have API's that connect to MATLAB. My example codes are for submitting orders automatically to an Interactive Brokers account.

In general, I find that writing execution programs in MATLAB is a breeze compared to C++, Java or even C#. It takes about 1/5 the development time of a C++ program. Any performance limitations will probably not be due to MATLAB, but to the latency of your brokerage in updating positions and order status.

Thursday, May 07, 2009

I will be holding a 2-day, hands-on, pairs trading workshop in London, October 14-15. It will be held in conjunction with the Automated Trading 2009 conference organized by the Technical Analyst magazine. Please see details here.

Thursday, April 30, 2009

In my book, I mentioned 2 seasonal trades in natural gas and gasoline futures that have been consistently profitable for 14 years. (Mentioned here and here also.) And not only in backtest: I paper-traded them in 2006, and actually traded them in 2007-8, and all 3 years were profitable. How did they fare in this recession year? Quite poorly.

Depending on your exact entry and exit points, the gasoline trade lost about $2,500 per contract of RB. The natural gas trade lost about $7,700 per contract of NG.

You may have heard that natural gas price is at a 6-year low. In fact, we are not seeing any increase in industrial demand for natural gas. Apparently, somebody has forgotten to tell the nation's industrialists that an economic recovery is supposed to be under way.

Will I enter into these seasonal trades again next year? You bet I will.

Sunday, April 19, 2009

As an algorithmic trader, I am constantly in search of a better physical infrastructure where I can connect via the internet to my execution broker at the highest speed and with the least possibility of outage, and at a reasonable cost.

To that end, I would like to mention Fios, a fiber-optic service from Verizon with a download speed of 50 Mbps and an upload speed of 20 Mbps, both faster than your typical T-1 line (1.5 Mbps). Furthermore, it costs only $45/month. Hey, even Paul Krugman has installed it at his home!

(I haven't tried it myself, and would like to hear from those of you who have and see if it is time to say goodbye to T-1.)

And as I have reported earlier, I am also constantly looking for a good cloud computing platform so that I can run more strategies without cluttering my office with computers. Finding one will obviate the need for any big investment in internet connectivity at the office.

To that end, I have been trying out Amazon's EC2 for several months. I use it to run one of our strategies, and I have to report that my experience is mixed.

Firstly, if you are not an IT person, it does take a lot of time (8 person-hours?) to get set up and running, especially with their security precautions. The learning curve is steep.

Secondly, and more annoyingly, the instances sometimes fail to start properly, or fail to bundle properly. (Bundling means saving the software configuration for future use.) I am using Windows instances. Maybe those who use Linux instances have better experiences?

Thirdly, and most annoyingly, when a new instance is started, Windows often cannot automatically synchronize its clock with time.windows.com or any other internet time server. As a result, the time is often wrong. Now, this may not be a big deal for usual office work. But when your automated trading strategy depends crucially on the time of day, it can be quite fatal to your profits. If anyone has experienced a similar problem with the Windows clock and knows of a fix, please let me know!

Despite all these hassles, I am still running strategies on EC2, hoping that once EC2 gets past the beta release, things will be better.

Sunday, April 12, 2009

In a recent post (hat tip: Russell M.), Tyler Durden at Zero Hedge quoted a quant trader saying that "Anyone who is doing anything sensible right now is either losing money or is out of the market entirely", and that "liquidity deleveraging is approaching (if not already is at) critical levels", and finally the scariest part: "we have crossed into major statistically deviant territory, likely approaching a level that is 6 standard deviation away from the recent norms."

He pointed out that NYSE weekly volume is running about 9% below its 52-week average. But this may not necessarily be the result of deliberate hedge fund deleveraging or increasing risk-aversion by quant traders. From my personal experience, the usual opportunities for mean-reversion have just markedly decreased in the last few months, with much of the cash sitting on the sidelines. I believe that quant traders are still ready to jump in at any time to provide liquidity should the market demand it. I don't think that the recent market condition portends a 6-sigma event, but if one should occur, it may actually be a great profit opportunity for many short-term mean-reversion traders, just as in those past 6-sigma events.

Friday, March 27, 2009

Thoughtful comments from a reader John S. from the UK on his experience with trading technology and models:

"I have been developing my own personal automatic trading systems using Excel VBA and based on rules I have developed over the years as an active private trader investor using both technical and fundamental data analysis.

One of the key merits of adopting an automatic trading system that has helped me is avoiding the temptation for manual interference, thereby improving profitability by maintaining consistency. I have found the challenge of developing a successful system very rewarding from a personal perspective, as I recognise that there are many who have tried and failed. However, one problem I have encountered is my ongoing desire to regularly modify and improve the system, which I have found can become counterproductive, as there is a real danger that system development becomes an end in itself! I just can't seem to stop tinkering as soon as I come up with a new idea or feature!

One advantage of using Excel VBA that I have found is that it is inherently flexible, as it facilitates the processing of data, which can be important especially when using fundamental data as part of the system. In this respect I recognise that every trader is trying to build in an edge that will make the system more profitable. I have noticed that many traders seem to focus only on price, trying to seek an edge by looking at special indicators or combinations of indicators etc. Combining price data analysis with a Factor Model approach is a challenge which is ideally suited to Excel VBA, as it can easily be used to process both fundamental and macroeconomic data into a form that can be integrated with price data analysis.

I recognise from your book that Matlab is more powerful than Excel VBA and may be just as flexible in integrating fundamental and macroeconomic data, but I just wanted to draw your attention to the benefits I have found in using Excel VBA, which may suit those who, like myself, are more comfortable using Excel VBA and are reluctant to change. Other features I have found helpful when back testing are automatically producing Price Charts that incorporate Entry and Exit points, which provides visual reassurance that the system is working as intended, as well as generating automatic Word reports recording key output for future reference."

Friday, March 13, 2009

As I mentioned in various previous blog posts, (e.g. see here), I believe mean-reversion strategies have been performing very well in the last year. Now here is an article (hat tip: Laurence) that provides concrete analysis to support this hypothesis. In fact, the author points out that most of the mean-reversion in recent years comes from the overnight close-to-open reversal.

Thursday, February 26, 2009

Here is a new low-cost service called Alerts4All that offers technical trading signals for retail investors. You can, for example, have an alert sent to you every time a "Double bottom" pattern occurs.

A much more advanced version of the service will be rolled out soon -- I saw a demo today where you can backtest your strategies online, combining different fundamental and/or technical variables as entry or exit signals. They also have some built-in models for you to adapt (e.g. a model based on The Little Book that Beats the Market by Joel Greenblatt.) More interestingly, you can look at other people's trading models and their historical and/or real-time performance.

Matlab or Alphacet it is not, but I think it will be quite useful for many retail traders. It might even be useful to professional traders who want a quick-and-dirty way to test out ideas.

Sunday, February 22, 2009

U.S. Congressman Peter DeFazio introduced H.R. 1068: “Let Wall Street Pay for Wall Street's Bailout Act of 2009”, which aims to impose a 0.25% transaction tax on the “sale and purchase of financial instruments such as stock, options, and futures.”

Ladies and gentlemen, 0.25% per side amounts to 50 basis points round-trip. Few if any statistical arbitrage strategies can survive this transaction tax.

And no, this is not "Wall Street paying for Wall Street's Bailout". This is small-time independent trader-entrepreneurs like ourselves paying for Wall Street's bailout.

Furthermore, this tax will drain the US market of liquidity, and ultimately will cost every investor, long or short term, a far greater transaction cost than 0.25%.
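To see the arithmetic, here is a back-of-the-envelope sketch in Python; the 10 bp gross edge is a hypothetical figure chosen for illustration only:

```python
# Impact of a 0.25% per-side tax on a hypothetical stat-arb round trip.
tax_per_side = 0.0025              # 0.25% on each sale and each purchase
round_trip_tax = 2 * tax_per_side  # 50 bp to open and close one position

gross_edge = 0.0010                # hypothetical 10 bp expected profit per round trip
net_edge = gross_edge - round_trip_tax  # -40 bp: the trade is now a guaranteed loser
```

Any strategy whose expected profit per round trip is under 50 bp flips from winner to loser; most statistical arbitrage lives well below that threshold.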

Wednesday, February 18, 2009

A seasonal spread is a spread which follows a regular pattern from year to year, such as generally falling in the spring or generally rising in October. To find seasonal spreads, I've been using ANOVA, which stands for analysis of variance. ANOVA is a well-established statistical technique which, given several groups of data, will determine if the groups have different averages. Importantly, it determines if the differences are statistically significant.

I start with several years of spread data, compute the spread's daily changes, then group the daily changes by their calendar month, giving me 12 groups. The ANOVA analysis tells me if the groups (months) have significantly different averages. If so, I know the spread is seasonal since it is consistently up in certain months and consistently down in others.
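Here is a minimal Python sketch of this procedure, computing the one-way ANOVA F-statistic from scratch on synthetic daily changes; the seasonal drift and noise levels are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily spread changes tagged by calendar month (1..12);
# months 5-6 drift up and 11-12 drift down, mimicking a seasonal spread.
months = rng.integers(1, 13, size=2000)
drift = np.where(np.isin(months, (5, 6)), 0.05,
        np.where(np.isin(months, (11, 12)), -0.05, 0.0))
changes = drift + rng.normal(0.0, 0.1, size=2000)

def anova_f(groups):
    """One-way ANOVA F-statistic: between-group vs within-group variance."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = np.concatenate(groups).mean()
    ssb = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)   # between
    ssw = sum(((g - g.mean()) ** 2).sum() for g in groups)        # within
    return (ssb / (k - 1)) / (ssw / (n - k))

groups = [changes[months == m] for m in range(1, 13)]
f_stat = anova_f(groups)
# An F-statistic large relative to the F(11, n-12) distribution
# flags the monthly averages as significantly different, i.e. seasonal.
```

In practice one would use a library routine (e.g. `scipy.stats.f_oneway`) to get the p-value directly, but the decomposition above is all ANOVA does.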

The beauty is that I can automate the process, scanning my entire database for seasonal spreads. A recent scan identified the spread between crude oil (CL) and gasoline (RB), for example. The initial ANOVA analysis indicated the CL/RB spread is very likely to be seasonal. This bar chart of each month's average daily change demonstrates the seasonality.

The lines show the confidence interval for each month's average. Notice how May and June are definitely "up" months because their confidence interval is entirely positive (above the axis). Likewise, November and December are definitely "down" months. For all other months, we cannot be certain because the confidence interval crosses zero, so the true average change could be either negative or positive. The conclusion: Be long the spread during May and June; be short during November and December.
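The "confidence interval entirely above the axis" test can be sketched in Python; the daily changes below are hypothetical numbers for a single month, and the 1.96 multiplier is the usual normal approximation (a t critical value is more exact for small samples):

```python
import math
import statistics

# Hypothetical daily spread changes for one month (illustration only).
may_changes = [0.04, 0.06, 0.02, 0.08, 0.05, 0.03, 0.07, 0.05, 0.04, 0.06]

n = len(may_changes)
mean = statistics.mean(may_changes)
sem = statistics.stdev(may_changes) / math.sqrt(n)  # standard error of the mean

# 95% confidence interval for the month's average daily change.
lo, hi = mean - 1.96 * sem, mean + 1.96 * sem

# The month qualifies as an "up" month only if the whole interval is positive;
# a "down" month requires hi < 0; otherwise no conclusion can be drawn.
is_up_month = lo > 0
```

Repeating this for all 12 months reproduces the long-May/June, short-November/December rule described above.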

For more details, please see my on-line paper regarding ANOVA and seasonal spreads.

Thursday, February 12, 2009

Just as one should not trust VaR completely, one should also beware of high Sharpe ratio strategies. As this Economist article pointed out, a strategy may have a high Sharpe ratio because it has so far been accumulating small gains quite consistently, but it could still be subject to a large loss when black-swan events strike.

Personally, I am more comfortable with strategies that do the opposite: those that seldom generate any returns, but always earn a large profit when financial catastrophes occur.

Friday, February 06, 2009

This Quebec pension fund lost some $25 billion due to non-bank asset-backed commercial paper (ABCP). Their Value-at-Risk (VaR) model did not take into account liquidity risk. As usual, the quants got the blame. But can someone tell me a better way to value risk than to run historical simulations? Can we really build risk models on disasters we have not seen before and cannot imagine will happen?

Sunday, February 01, 2009

"I am more than half way through your book and am stuck at a concept that I can't seem to find an answer in any other forum.

I have read Ralph Vince's "Portfolio Management Formulas," which uses Kelly's formula to calculate an optimal "fraction" of the bankroll to bet on each trial. So a trader can calculate a fraction of his total trading account value to risk on each trade. What I am referring to is the so-called "fixed-fractional" trading. There exists an optimal fraction that will maximize the geometric growth rate of the trading equity, in theory anyway.

However, in the money management chapter of your book, you use Kelly's formula to derive an optimal "leverage." This seems to be in conflict with what I learned from Ralph Vince, since leverage is usually greater than unity and a fraction is usually less than unity. I can't seem to make a connection between these two concepts. I have also seen the same optimal leverage formula in Lars Kestner's Quantitative Trading Strategies and asked the same question on some forums, but no one was able to give me a clear, satisfactory answer. It would be greatly helpful if you could help me sort out the confusion."

A:

I don't have Ralph Vince's book with me, but if I recall correctly, his formulation is based on discrete bets (win or lose, no intermediate outcome), much like horse-betting or a casino game. My approach, or rather, Professor Ed Thorp's approach, is based on continuous finance, assuming that every second, your P&L could fluctuate in a Gaussian ("log-normal") fashion.

For discrete bets where you could lose all of your equity in one bet, surely you should only bet a fraction of your total equity. For continuous finance, there is very little chance of losing all of the equity in one time period, due to the assumed log-normal distribution of prices. Hence one can bet more than one's equity, i.e. use leverage.
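The distinction can be made concrete by putting the two formulas side by side; all the probabilities, returns, and volatilities below are hypothetical numbers for illustration:

```python
# Discrete Kelly (Vince-style): binary bet with win probability p and
# net odds b (win b per unit staked, else lose the stake).
# f* = p - (1 - p) / b  -- a FRACTION of equity, less than 1.
p, b = 0.55, 1.0                 # hypothetical even-money bet with a 55% edge
f_discrete = p - (1 - p) / b     # 0.10: risk 10% of equity per bet

# Continuous Kelly (Thorp-style): Gaussian P&L with mean excess return m
# and variance s**2 per unit time.
# f* = m / s**2  -- a LEVERAGE, often greater than 1.
m, s = 0.10, 0.20                # hypothetical 10% excess return, 20% volatility
f_continuous = m / s**2          # 2.5: hold 2.5x equity in the strategy
```

Both expressions maximize the expected log-growth of equity; they differ only in the assumed return distribution (two-point discrete vs Gaussian), which is why one lands below unity and the other above it.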

Q:

In example 6.2 in your book, the portfolio consists of only long SPY, which has little chance of going to zero. So I can see how it is reasonable that you use the continuous finance approach and apply the optimal leverage to scale up the return.

But let's assume that the portfolio consists of a single strategy that buys options. Suppose this strategy will lose most of the time due to time decay but will make a profit once in a while due to black-swan events. I don't think it's a good idea to bet the entire portfolio equity on each trade for this strategy. Can you still apply the continuous finance approach in this case, since in reality trading is like making discrete bets? Should we expect the mean and variance of this strategy to automatically result in an optimal leverage that is less than one, so that we actually need to risk a fraction of the account equity per trade?

A:

The formula I depicted in the book is valid only if the P&L distributions are Gaussian. If one expects a fat-tailed distribution due to black-swan events, a different mathematical model needs to be used, though it can still be within the continuous finance framework. However, for simplicity's sake, if the distribution looks multinomial (e.g. "Win a lot" vs "Lose a lot", each with some probability), then you may model it with fractional betting just like a casino game.
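For such a multinomial payoff, the optimal fraction can be found numerically by maximizing the expected log-growth of equity, the quantity Kelly betting maximizes. A sketch with a hypothetical option-buying payoff (lose the premium 90% of the time, win 14x the stake 10% of the time):

```python
import math

# Hypothetical discrete, option-like payoff: (net return per unit staked, probability).
outcomes = [(-1.0, 0.90), (14.0, 0.10)]

def expected_log_growth(f, outcomes):
    """E[log(1 + f*r)] -- expected log-growth of equity at betting fraction f."""
    return sum(p * math.log(1 + f * r) for r, p in outcomes)

# Grid-search f over (0, 1); the -1.0 outcome rules out any f >= 1
# (betting the whole account would mean ruin on the first losing trade).
fs = [i / 1000 for i in range(1, 1000)]
f_star = max(fs, key=lambda f: expected_log_growth(f, outcomes))
# f_star is far below 1: only a small fraction of equity per trade.
```

For this two-outcome case the closed form f* = p - q/b = 0.10 - 0.90/14 ≈ 0.036 agrees with the grid search; the point is that the fat-tailed payoff drives the optimal bet well below the whole account, exactly as the question anticipates.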

Friday, January 16, 2009

Lately a number of new (at least to me) technologies useful to the algorithmic trader came to my attention:

1) Matlab2IB API

I said in my book that it is difficult to use Matlab as an execution platform. As Max has pointed out, this is no longer true. This inexpensive API connects Matlab to your Interactive Brokers' account. It allows you to retrieve historical data, get real-time quotes, and send orders. In other words, all the basic functions you need to create your own execution engine.

2) R

Many people (hat tip: Steve H.) know that R is an open-source (i.e. free) alternative to Matlab. I find that there is also an API that connects R to Interactive Brokers, though I have not tried it myself.

3) Amazon EC2

Running out of PCs to run your myriad strategies? Try Amazon's EC2 cloud computing platform. For a modest hourly fee, you get access to an instance of either a Linux or Windows environment, and you can add as many instances as you want. The connection speed is supposed to be at least 10x a T-1 line, well-suited to high-frequency traders. Here are some other performance benchmarks.

Monday, January 12, 2009

See this interesting article (registration required) on FT on the state of the hedge fund industry. Paul Tudor Jones, Citadel, and Fortress Investment Group are all said to be moving to "easy-to-understand liquid strategies", otherwise known as "statistical arbitrage".

Friday, January 09, 2009

Felix Salmon claimed in this post (hat tip: J. Rigg) that the quant job market is alive and well. However, I haven't heard much from the usually diligent headhunters in the last few months, which doesn't bode well. Maybe some of our readers can comment on the current state of the quant job market?

In that same post, Felix wondered whether to incorporate the extraordinary period of 2008 as part of backtesting data. Actually, I don't see much of a problem here -- of course one should include 2008. The only reason a trading model would have performed poorly in 2008, as opposed to 2006, 2007 or 2009, would be that its parameters are fitted too tightly to historical data. If you try out some parameterless trading models, as I advocated, 2008 is not that unusual except for its higher volatility.