IDT Collaborates with Cavium to Support Hyperscale Data Centers

SAN JOSE, Calif., June 20, 2016 - Integrated Device Technology, Inc. (IDT) (NASDAQ: IDTI) today announced its collaboration with Cavium, Inc., on a reference design that supports the burgeoning workloads of the hyperscale data center. The IDT® DDR4 memory interface solutions are incorporated into the reference design, which is built on Cavium's ThunderX family of workload-optimized 64-bit ARMv8-based processors.

The Cavium ThunderX® reference design can be used to build energy-efficient server solutions featuring IDT's DDR4 technology. Cavium and its industry partners developed the design to target server solutions for members of the Open Compute Project (OCP) and other hyperscale data center customers.

"IDT's DDR4 technology offering and the company's leadership in the OCP High Performance Computing Project made them the obvious choice to team with," said Rishi Chugh, director, Data Center Processor at Cavium, which similarly has a long history collaborating in the Open Compute community. "Our relationship with IDT enables us to deliver workload-optimized, flexible, scalable and efficient ARM-based server solutions to the data center market, including the growing community of the OCP."

The ThunderX product family is Cavium's server processor line for next-generation data center and cloud applications, featuring high-performance custom cores, single- and dual-socket configurations, high memory bandwidth and large memory capacity. The product family also includes integrated hardware accelerators, feature-rich high-bandwidth network and storage I/O, fully virtualized cores and I/O, and a scalable, high-bandwidth, low-latency Ethernet fabric.

"IDT is excited to work with partners like Cavium and industry bodies like the Open Compute Project to demonstrate the compelling value of our DDR4 chipset in hyperscale data centers," said Rami Sethi, vice president and general manager of memory interface products at IDT. "Our solution uniquely enables users to scale memory capacity without compromising data rates in order to maximize workload performance and efficiency."