
On Friday, 20th April 2012, two mysterious events occurred on the National Stock Exchange (NSE). In the morning, Infosys futures crashed over 20% and quickly recovered to their original level. In the afternoon, just before 2:30pm, Nifty futures crashed 6.7% from the 5,350 level down to 5,000, then almost instantly recovered to 5,200. Both crashes were blamed on algorithmic trading.

Program trading has been blamed for “flash crashes” for nearly 25 years. In October 1987, US markets nosedived in a single day, and for years the blame game went on with program trades as the primary suspect. This is understandable: since a computer can trade far faster than a human, it can set off a spiralling price move by continuously buying or selling with no real control. Why, then, should we even allow algorithmic trading?

Program trading can provide great trading opportunities with less human error. Much of the arbitrage that used to happen in Indian markets was manual. To bridge any difference in prices between the NSE and BSE (the two largest stock exchanges in India), arbitrageurs would use two computers, manually entering a buy order on one and a sell order on the other. An operator’s speed was his biggest skill, so the dealing room would resound with a cacophony of keyboards when an opportunity arose. The problem? A human can only watch so many opportunities at once, so many price differences went untaken. A slight error on the keyboard (F2 instead of F1) could result in a large loss. Operators also cost money in computers, real estate and benefits. You could eliminate much of this by having a computer do exactly the same thing.
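The manual two-terminal arbitrage described above is, at its core, a simple comparison that a machine can repeat tirelessly across every listed stock. A minimal sketch of that check follows; the function name, prices and the per-leg cost figure are all illustrative assumptions, not any real exchange API.

```python
# Hypothetical sketch of the NSE/BSE arbitrage check that operators once
# performed by hand: buy on the cheaper exchange, sell on the costlier
# one, but only if the gap covers transaction costs on both legs.

def arb_opportunity(nse_price, bse_price, cost_per_leg=0.05):
    """Return (action, gap) if the price gap exceeds round-trip costs."""
    spread = bse_price - nse_price
    if spread > 2 * cost_per_leg:
        return ("buy_nse_sell_bse", spread)
    if -spread > 2 * cost_per_leg:
        return ("buy_bse_sell_nse", -spread)
    return (None, abs(spread))

# A 30-paise gap clears the assumed 10-paise round-trip cost:
action, gap = arb_opportunity(100.00, 100.30)
```

A computer running this loop over thousands of symbols never mistypes F2 for F1, which is precisely the error-reduction argument the paragraph makes.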

Program trading can curtail broker front-running and impact costs. Often, when a fund had to take a large position, its brokers would place their own buy orders first, so that the fund’s large purchase would hand them a tidy profit. In India, much of the volume is concentrated in the top 100 stocks; beyond those, stocks trade less than 20 cr. a day. If a mutual fund were to buy about 25 cr. ($5 million) of a lesser-known stock, the order’s sheer size would immediately drive up the price, and a broker would be quite likely to front-run the purchase. Using an algorithm instead lets the purchaser spread the buying over several days and several brokers, hunting for volume slowly over time.
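Spreading a purchase “slowly over time” usually starts with splitting the parent order into small, near-equal child orders. The sketch below shows a simple time-slice split under assumptions of my own (function name, quantities and the number of slices are hypothetical); real execution algorithms also track traded volume and randomize child sizes to stay harder to detect.

```python
# Illustrative sketch: split a large parent order into n near-equal child
# orders, so no single order moves the price or tips off a broker.

def slice_order(total_qty, n_slices):
    """Split total_qty into n_slices child orders differing by at most 1."""
    base, rem = divmod(total_qty, n_slices)
    # the first `rem` slices absorb the remainder, one share each
    return [base + (1 if i < rem else 0) for i in range(n_slices)]

# e.g. 2.5 lakh shares spread across 8 sessions or brokers:
children = slice_order(250_000, 8)
```

Each child can then be routed to a different broker or session, which is what blunts both the price impact and the front-running opportunity.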

Algorithms can also provide liquidity where there isn’t otherwise any. For you to buy a stock, there needs to be a seller in place, and many stocks don’t attract that kind of interest from either investors or traders. Of the nearly 1,500 stocks traded on the NSE, more than 1,000 trade less than 50 lakh (Rs. 5 million) a day. In such stocks the spread between the buy and sell prices on the exchange may be too wide; a market maker provides the liquidity that lets you buy or sell at a reasonable cost. Market-making operations used to be manual; they are now run through algorithms.
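At its simplest, an algorithmic market maker quotes a bid just below and an offer just above some reference price, earning the spread while giving everyone else a reasonable price to deal at. The sketch below is a minimal illustration under assumed parameters (the function name and the 0.25%-per-side spread are hypothetical, not any firm’s actual quoting logic).

```python
# Minimal market-maker quoting sketch: place a two-sided quote around a
# reference mid price, half the spread on each side.

def make_quotes(mid_price, half_spread_pct=0.25):
    """Return (bid, ask) quoted half_spread_pct either side of mid_price."""
    half = mid_price * half_spread_pct / 100
    return (round(mid_price - half, 2), round(mid_price + half, 2))

# For a stock with a Rs. 200 reference price, quote a 0.5% total spread:
bid, ask = make_quotes(200.00)
```

A real market maker would continuously re-centre these quotes as trades hit them and manage the inventory it accumulates, but the liquidity it supplies to a thinly traded stock comes from exactly this standing two-sided quote.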

Finally, large orders (greater than a few crores in value) are usually blocked by stock exchanges on the assumption that they are fat-finger trades or mistaken entries. Yet large deals do have to happen sometimes: if an investor decides to exit a large holding in a stock, they might use an algorithm to send in orders in allowable chunks.
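Sending a large exit “in allowable chunks” means sizing each child order under the exchange’s per-order value cap. A sketch, with the caveat that the cap, prices and quantities below are illustrative assumptions; actual limits vary by exchange and segment.

```python
import math

# Hypothetical sketch: break a large exit into child orders that each
# stay under an assumed per-order value cap, so none gets blocked.

def chunk_by_value(total_qty, price, max_order_value):
    """Yield child order quantities, each worth at most max_order_value."""
    max_qty = max(1, int(max_order_value // price))
    for _ in range(math.ceil(total_qty / max_qty)):
        qty = min(max_qty, total_qty)
        total_qty -= qty
        yield qty

# Exiting 10 lakh shares at Rs. 150 under an assumed Rs. 2 cr. cap:
chunks = list(chunk_by_value(1_000_000, 150.0, 20_000_000))
```

Every chunk passes the value check individually, so the whole exit goes through without tripping the exchange’s fat-finger filter.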

Algorithmic trading, however, comes with its own set of problems. A rogue program can place orders continuously and take the entire system down. During Mahurat Trading in October 2011, one such program created a ruckus on the BSE, so much so that the exchange cancelled all trades made that day to avoid a payment crisis.

Even if an algorithm splits large orders into chunks that exchanges let through, the input itself may be faulty (an extra zero, for instance), in which case the algorithm faithfully executes exactly what the order limit was designed to block: the fat-finger trade. This, they say, is what happened with the Infosys order on Friday, 20th April 2012.
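The obvious defence is a pre-trade sanity check that vets the parent order before any slicing begins. The sketch below shows the general idea under hypothetical thresholds (the function name, the 10%-of-volume rule and the 5% price band are my illustrative assumptions, not any exchange’s actual risk rules).

```python
# Sketch of a pre-trade risk check that could catch a fat-finger input
# (an extra zero) before a slicing algorithm faithfully executes it.
# All thresholds here are hypothetical.

def validate_order(qty, price, last_price, avg_daily_volume,
                   max_pct_adv=10.0, max_price_band_pct=5.0):
    """Reject orders too big versus volume or too far from the last price."""
    errors = []
    if qty > avg_daily_volume * max_pct_adv / 100:
        errors.append("quantity exceeds %.0f%% of average daily volume"
                      % max_pct_adv)
    if abs(price - last_price) / last_price * 100 > max_price_band_pct:
        errors.append("price outside %.0f%% band around last trade"
                      % max_price_band_pct)
    return (len(errors) == 0, errors)

# An order for 5 lakh shares in a stock that trades 3 lakh a day fails:
ok, reasons = validate_order(qty=500_000, price=2400.0,
                             last_price=2450.0, avg_daily_volume=300_000)
```

Such a check costs microseconds per order, which is why the human-input errors the article describes are considered avoidable rather than inherent to algo trading.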

With such large orders, stop-losses can get triggered, creating another spiral. Some traders place a protective stop (in simple terms, “sell if the price falls to X”) well below market prices; such stops get taken out when a large order comes through, and those who were sold out are left disappointed when the market rebounds immediately. But that would happen with large “manual” orders too; algo trading is just a convenient scapegoat.

The notion that steep falls are engineered only by automated trading is also suspect. The market has “circuit” limits that halt trading when the index moves more than 10%. After the 2009 elections, when markets moved up 10% in a short while, algorithmic trading wasn’t blamed; nor was it when markets crashed 10% in October 2008. The feeling was that those moves were “justified” because there was news behind them (the election results and the Lehman bankruptcy, respectively). Since there was no “reason” on Friday, computers must have been to blame.

This is just witch-hunting. Computers do exactly what they have been programmed to do, and there will be large errors if they aren’t monitored properly (just as manual traders must be). The regulators and the exchanges must investigate each such case, and indeed, it has turned out that human input caused the error. Every rogue trader or rogue trading program must be found and punished. Surveillance needs to get much more sophisticated at detecting misbehaving automated trades. Orders entered by algorithms already carry a distinct identifier code, so a series of checks can be run whenever required to see if any rules were violated. Some of this cost should be borne by the algo-trading community, through something like a per-transaction or per-order fee payable to SEBI.

But we can’t go around demanding bans on algorithmic trading just because of a flash crash. Knee-jerk reactions like that will hurt legitimate players or put them at the mercy of their brokers, and that is plain wrong.