IBM flash interconnect aims to bust bottlenecks

SANTA CLARA, Calif. – IBM is developing a new storage I/O technology geared for flash in the wake of its acquisition of Texas Memory Systems. Big Blue is expected to court an ecosystem of third parties to support the technology, which may take the form of a value-added layer on top of an existing interconnect standard.

A deluge of data will drive requirements to serve tens and even hundreds of millions of I/O operations per second. Today’s interconnects cannot meet that demand, said Andy Walls, IBM's chief architect for storage hardware, in a keynote at the Server Design Summit here.

“The current I/O model is not sufficient because it was developed for disk drives and is not necessarily best for flash,” said Walls. “You want to scrutinize huge volumes of data and determine in almost real time what to do with it.”

Today’s storage interconnects can be limited to as little as a few hundred thousand IOPS. They are generally tuned for the path lengths, data layouts and single-core queuing that disk drives require.

A next-generation I/O for flash subsystems will need to support multicore processors, be more open to extended reads and have better support for clusters, Walls said. The technology is in development inside IBM, but the company has not yet decided how to bring it to market, Walls told EE Times in an interview.

Texas Memory Systems uses InfiniBand and Fibre Channel for network links and PCI Express for direct-attached configurations. Walls noted a variety of standard and proprietary interconnects are in the works, including variants of PCI Express and SCSI.

Users generate as much as 2.5 exabytes of data a day, so much that 90 percent of the world's data was generated in the last two years. “It’s an astounding growth,” Walls said.