The CTO of an equity trading firm, who agreed to speak with HPCwire's sister publication EnterpriseTech anonymously, said his company has been a Solarflare customer for four years and that its IT department has validated Solarflare's claim of 20-30 nanoseconds of latency for TCPDirect.

Financial traders are in a race to make transactions ever faster. In today's high-tech exchanges, firms can execute more than 100,000 trades per second for a single customer. This summer, London's and New York's financial centres will be able to communicate about 2.6 milliseconds (roughly 10%) faster with the opening of the Hibernia Express, a transatlantic fibre-optic line costing US$300 million. As technology advances, trading speed is increasingly limited only by fundamental physics, and the ultimate barrier is the speed of light.

Through glass optical fibres, information travels at about two-thirds of the speed of light in a vacuum (300,000 kilometres per second). To go faster, data must travel through the air. Next up may be hollow-core fibre cables, in which light travels through a tiny air gap at close to vacuum light speed.
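The scale of the savings follows directly from those two speeds. A back-of-the-envelope sketch (the 5,570 km figure is the approximate great-circle London-New York distance, used here purely for illustration; real cable routes are longer):

```python
# Illustrative propagation-delay arithmetic, not actual route data.
distance_km = 5_570          # approx. great-circle London-New York distance
c_vacuum = 300_000           # km/s, speed of light in a vacuum
c_fibre = c_vacuum * 2 / 3   # km/s, roughly the speed of light in glass fibre

one_way_fibre_ms = distance_km / c_fibre * 1000
one_way_air_ms = distance_km / c_vacuum * 1000

print(f"fibre: {one_way_fibre_ms:.1f} ms one way")
print(f"vacuum/air: {one_way_air_ms:.1f} ms one way")
print(f"potential saving: {one_way_fibre_ms - one_way_air_ms:.1f} ms one way")
```

Even over an idealised straight-line path, closing the gap between glass and vacuum is worth several milliseconds each way, which is why air-path and hollow-core links attract this kind of investment.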

High-frequency trading relies on fast computers, algorithms for deciding what and when to buy or sell, and live feeds of financial data from exchanges. Every microsecond of advantage counts. Faster data links between exchanges minimize the time it takes to make a trade; firms compete to place their servers closest to an exchange's matching engine; traders jockey to sit closer to the pipe. It all costs money: renting fast links runs around $10,000 per month.

The CTO regards Solarflare as a partner that lets high-frequency trading firms focus on their core competencies, rather than devoting in-house time and resources to lowering latency.

“It used to be the case that there weren’t a lot of commercial, off-the-shelf products applicable to this space,” he said. “If one of our competitors wanted to do something like this for competitive advantage, Solarflare can do it better, faster, cheaper, so they’re basically disincentivized from doing so. In a sense this is leveling the playing field in our industry, and we like that because we want to do what we’re good at, rather than spending our time working on hardware. We’re pleased when external vendors provide state-of-the-art technology that we can leverage.”

The latency through any TCP/IP stack, even one written to be low-latency, is a function of the number of processor and memory operations that must be performed between the application sending or receiving and the network adapter serving it. According to Ahmet Houssein, Solarflare's VP of marketing and strategic development, TCP/IP's feature richness and complexity force implementation trade-offs among scalability, feature support and latency. Independently of the stack implementation, going through the kernel imposes system calls, context switches and, in most cases, interrupts, all of which add latency.
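The kernel overhead Houssein describes is easy to observe directly: every send and receive on an ordinary socket is a system call, with a copy between user and kernel space, whereas a pure user-space memory operation involves neither. The sketch below is a generic illustration of that gap, not Solarflare's API or method; absolute numbers will vary widely by machine, and a local socketpair merely stands in for any kernel-mediated network path.

```python
import socket
import time

def time_kernel_path(n=10_000):
    """Average per-operation cost of a send/recv pair over a socketpair.

    Each iteration crosses into the kernel twice (two system calls),
    copying the payload between user and kernel buffers each time.
    """
    a, b = socket.socketpair()
    payload = b"x" * 64
    start = time.perf_counter()
    for _ in range(n):
        a.send(payload)
        b.recv(64)
    elapsed = time.perf_counter() - start
    a.close()
    b.close()
    return elapsed / n

def time_user_space(n=10_000):
    """Average cost of copying the same 64 bytes entirely in user space."""
    payload = b"x" * 64
    buf = bytearray(64)
    start = time.perf_counter()
    for _ in range(n):
        buf[:] = payload   # no system call, no kernel involvement
    return (time.perf_counter() - start) / n

kernel_ns = time_kernel_path() * 1e9
user_ns = time_user_space() * 1e9
print(f"kernel path: ~{kernel_ns:.0f} ns/op, user-space copy: ~{user_ns:.0f} ns/op")
```

On typical hardware the kernel-mediated path costs on the order of microseconds per operation while the user-space copy costs tens of nanoseconds, which is the gap kernel-bypass stacks like TCPDirect set out to close.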