“InfiniBand adoption continues at a very rapid rate. The introduction of the InfiniHost III Ex complements our existing 3rd generation switching device, the InfiniScale III,” said Eyal Waldman, chairman and CEO of Mellanox Technologies, LTD. “The new InfiniHost adapter features advanced capabilities that have been developed based upon extensive feedback from customers that manage data centers and high performance clusters. These improvements advance the performance, maturity, scalability and reliability of applications, like Oracle and DB2, running on InfiniBand clustered servers and storage.”

InfiniHost III Ex adapters and 8X PCI Express server platforms together enable an aggregate of 20 Gb/sec of I/O bandwidth. PCI Express greatly improves upon the 8 Gb/sec bandwidth that PCI-X slots provide while also reducing latency. Bandwidth and latency are critical attributes for clustered database, High Performance Computing (HPC), embedded, and storage applications that take advantage of InfiniBand’s Remote Direct Memory Access (RDMA) capabilities.

“PCI Express and InfiniBand are a perfect marriage of technologies for data center computing. Combined they enable an unprecedented 20Gb/sec bandwidth from Intel® Architecture-based processors to server and storage systems on an InfiniBand fabric,” said Jim Pappas, director of initiative marketing for Intel’s Enterprise Platform Group. “We have been working closely with Mellanox on their design and verification of PCI Express-enabled InfiniHost III Ex products to help ensure interoperability with Intel Architecture-based server chipsets and platforms.”

The InfiniHost III Ex is the 3rd generation InfiniBand HCA from Mellanox and offers full backward software compatibility with the existing InfiniHost HCA, allowing the existing software base, including tools, protocols, and applications, to run unchanged over the InfiniHost III Ex. Feature enhancements include improved caching, pre-fetching, enhanced I/O operations, additional CPU off-load capabilities, and other features that transparently improve application speed.

The InfiniHost III Ex is ideal for blade designs. The device features a small 27 x 27 mm package and a single-chip design that integrates the SERDES physical layer. TSMC’s state-of-the-art 130 nanometer silicon fabrication process allows the core to run at a higher clock rate while requiring only ~5 Watts of power.

“As a systems leader and an active participant in the InfiniBand Trade Association, Sun works with best-in-class InfiniBand technology providers such as Mellanox to develop InfiniBand-based hardware and software,” said Subodh Bapat, vice president and chief technologist, Volume Systems Products, Sun Microsystems, Inc. “With its improved support for bandwidth and lower latencies, the InfiniBand architecture and PCI Express have the potential to offer tremendous performance benefits to InfiniBand-based blade servers.”

The InfiniHost III Ex HCA PCI Express add-in card expands the Mellanox family of HCA adapter cards. The card is a full 8X PCI Express card providing 20 Gb/sec of raw bandwidth, is compatible with the latest PCI Express 1.0a specification, and includes the ability to auto-negotiate. The card is offered in memory configurations of 128MB, 256MB, and 512MB.

Availability
The InfiniHost III Ex is currently available to OEM partners and is priced at $231 in 10K quantities. The InfiniHost III Ex low profile PCI Express card is currently sampling to OEM partners. General availability of both products through our OEM partners is expected in Q2 2004.

About InfiniBand
InfiniBand is the only 10 Gb/sec, ultra-low-latency clustering, communication, storage, and embedded interconnect on the market today. Based on an industry standard, InfiniBand provides the most robust data center interconnect solution available, with reliability, availability, serviceability, and manageability features designed in from the ground up. These attributes greatly reduce total cost of ownership for the data center. Low-cost InfiniBand silicon that supports 10 Gb/sec RDMA transfers is shipping today, providing 25 times the bandwidth of Ethernet and three to eight times the bandwidth of proprietary clustering interconnects. With 30 Gb/sec products currently shipping, InfiniBand is at least a generation ahead of competing fabric technologies, both today and for the foreseeable future.

About Mellanox
Mellanox is the leading supplier of InfiniBand semiconductors, providing complete solutions including switches, host channel adapters, and target channel adapters to the server, communications, data storage, and embedded markets. Mellanox Technologies has delivered more than 100,000 InfiniBand ports over two generations of 10 Gb/sec InfiniBand devices, including the InfiniBridge, InfiniScale, InfiniHost, and InfiniScale III devices. Mellanox InfiniBand interconnect solutions today provide over eight times the performance of Ethernet and over three times the performance of proprietary interconnects. The company has strong backing from corporate investors including Dell, IBM, Intel Capital, Quanta Computers, Sun Microsystems, and Vitesse, as well as strong venture backing from Bessemer Venture Partners, Raza Venture Management, Sequoia Capital, US Venture Partners, and others. The company has major offices located in Santa Clara, CA, and in Yokneam and Tel Aviv, Israel. For more information on Mellanox, visit www.mellanox.com.

Mellanox is a registered trademark of Mellanox Technologies, and InfiniBridge, InfiniHost, and InfiniScale are trademarks of Mellanox. InfiniBand is a registered trademark and service mark of the InfiniBand Trade Association.