New converged network blends Ethernet, Infiniband

SAN JOSE, Calif. – The race toward converged networks took a step forward Monday (April 19) as a trade group announced a capability for layering Infiniband's low-latency features on top of Ethernet. The so-called Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCE, pronounced "Rocky") is geared for high-performance computing applications, typically running on large clusters.

Infiniband and Ethernet chip and card vendors could announce 10 Gbit/second RoCE products before the end of April, according to the Infiniband Trade Association, which developed the technology.

The RoCE products could offer lower latency, price and power consumption than competing cards and chips based on advanced versions of Ethernet. However, they will not support Internet Protocol, making them unsuitable for use in the majority of computer networks that require IP routing.

For several years, engineers have been consolidating features from multiple networks as a way to simplify the management and lower the costs of business networking. In 2008, they paved the way for merged storage and networking by creating the Fibre Channel over Ethernet specification. RoCE extends the consolidation to include interprocessor communications used in large computer clusters for a range of database, financial, simulation and scientific uses.

Specifically, RoCE layers Infiniband's network and transport features (layers three and four, respectively, of the OSI stack) on top of the physical and media access control layers (layers one and two) of Ethernet. Infiniband was one of the first networks to implement RDMA techniques, and as such its upper layers are relatively simple and mature compared to those of Ethernet, which uses a set of newer, overlapping protocols.

Mellanox, one of the primary producers of Infiniband chips, will implement the Infiniband layers in hardware to deliver latencies as low as 1.3 microseconds. Many Ethernet chip and card makers are expected to implement the Infiniband features in software running on a host processor, delivering latencies closer to the seven to 10 microseconds of today's advanced Ethernet products.

The lower latencies can deliver up to a ten-fold performance boost in Oracle databases optimized for Infiniband. Such low latencies are required for data-intensive applications such as simulations running on a computer cluster.

Other apps, such as the batch rendering of a large animated movie, can be handled using systems with the higher latency figures. Ethernet vendors who may adopt software versions of RoCE include Broadcom, Emulex, Intel and QLogic.

"Think of [RoCE] as a way to bring Infiniband applications which are predominantly based on clusters on to a common Ethernet converged fabric," said Michael Krause, an I/O specialist at Hewlett-Packard who co-wrote a 2009 paper on RoCE. "Nearly all of the [existing networking] software should come across without modification with the exception of the Infiniband management code which is not applicable to an Ethernet infrastructure," he added.

The software compatibility is due in part to the broad use of the so-called OpenFabrics middleware, open source code that supports RoCE as of version 1.2.1. HP is taking an agnostic stance on the variety of merged Ethernet, Infiniband and Fibre Channel network technologies emerging, supporting whatever flavors users request.

"While RoCE may be seen by some as a threat, others will see it as validating an Ethernet-centric interconnect strategy as well as reinforcing the merits of RDMA technology which has been seen as the main strength of Infiniband and [advanced Ethernet] to date," Krause said.
