"Today's servers integrate multiple dual- and quad-core processors with high-bandwidth memory subsystems, yet the I/O limitations of Gigabit Ethernet and Fibre Channel effectively degrade the system's overall performance," said Eyal Waldman, chairman, president and CEO of Mellanox Technologies, in a statement.

The ConnectX IB fourth-generation InfiniBand HCAs are intended to resolve those I/O limitations. They deliver 1-µs RDMA write latency, 1.2-µs MPI ping latency, and a uni-directional MPI message rate of 25 million messages per second. The InfiniBand ports connect to the host processor through a PCI Express x8 interface.
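Taken together, the quoted figures imply heavy message pipelining: at 25 million messages per second, a new message completes roughly every 40 ns, far less than the 1.2-µs ping latency of any single message. As a back-of-the-envelope sketch (plain Python, using only the numbers quoted above; the in-flight estimate is our own application of Little's law, not a vendor figure):

```python
# Back-of-the-envelope check of the quoted ConnectX figures.
# All inputs come straight from the article; nothing here is measured.

msg_rate = 25e6          # uni-directional MPI message rate, messages/s
ping_latency_s = 1.2e-6  # MPI ping latency, seconds

# Average gap between message completions at the quoted rate.
gap_s = 1.0 / msg_rate                 # 40 ns per message

# Messages that must be "in flight" to sustain that rate despite the
# per-message latency (Little's law: L = rate x latency).
in_flight = msg_rate * ping_latency_s  # about 30 messages

print(f"gap between messages: {gap_s * 1e9:.0f} ns")
print(f"messages in flight:   {in_flight:.0f}")
```

In other words, the adapter can only reach the quoted rate by keeping on the order of 30 messages outstanding at once, which is exactly the kind of workload RDMA-style queue pairs are designed to drive.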

In addition, they extend network-processing offload and improve traffic and fabric management. New capabilities include hardware reliable multicast, enhanced atomic operations, hardware-based congestion control and granular quality of service.