Serial-Interface Memory Architectures for 100 GbE

Between packet processing at 100GbE line rates and the move to IPv6, the traditional memory approaches in networking are being squeezed. What's the best way forward?

Many forces are at play making the development of 100GbE and 400GbE systems challenging, and traditional networking memory architectures are being squeezed from every direction. In an article on EDN's Wireless/Networking Design Center (a sister site), Michael Sporer of MoSys makes the case for serial-interface memory architectures. He takes a detailed look at the high-efficiency GigaChip Interface and at how an intelligent memory architecture can address the requirements of next-generation equipment, using the MoSys 3rd Generation Bandwidth Engine IC as an example.

Here’s a clip from the piece:

The emergence of 100 Gbps line rates plus the transition to IPv6 requires next-generation equipment that delivers many times the bandwidth in the same form factor. Designers must also consider network management and quality of service (QoS) requirements [1], such as absolute delay, delay jitter, minimum delivered bandwidth, and packet loss, which are used to monitor and manage networks and are the basis of contractual service level agreements in carrier-class networks.

In order to handle the Internet of Things (IoT), the number of network addresses is going to expand dramatically. IPv6 enables this with its 128-bit address field. There is also a shift towards software-defined networking (SDN) and network function virtualization (NFV) in order to make deployment of generic hardware more flexible, fungible, and efficient. In this environment, the basic tasks of network address lookup, flow statistics, and atomic thread management become throttled by memory bandwidth such that embedded and standalone CPUs cannot keep up with network traffic demands. This drives a need to accelerate base memory functions that are unique to networking, thus building architectural pressure for intelligent memory offload [2].
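A quick back-of-envelope calculation illustrates why lookups become memory-bound at these rates. The numbers below are not from Sporer's article; the accesses-per-lookup figure in particular is an illustrative assumption for a trie-based IPv6 longest-prefix match.

```python
# Back-of-envelope: memory accesses needed for route lookups at 100 Gbps.
# Worst case is a stream of minimum-size Ethernet frames: 64-byte frame
# plus 20 bytes of preamble and interframe gap = 84 bytes on the wire.

LINE_RATE_BPS = 100e9               # 100 Gbps line rate
FRAME_BITS = (64 + 20) * 8          # 672 bits per minimum-size frame

packets_per_sec = LINE_RATE_BPS / FRAME_BITS   # ~148.8 million packets/s

# Assumed (illustrative) number of memory touches per IPv6 lookup,
# e.g. the depth of a multibit trie traversal.
MEM_ACCESSES_PER_LOOKUP = 4

accesses_per_sec = packets_per_sec * MEM_ACCESSES_PER_LOOKUP

print(f"Worst-case packet rate: {packets_per_sec / 1e6:.1f} Mpps")
print(f"Required random accesses: {accesses_per_sec / 1e6:.0f} M/s")
```

Even with this modest per-lookup assumption, the lookup path alone demands hundreds of millions of random memory accesses per second, before counting flow statistics or queue management, which is the bottleneck that intelligent memory offload targets.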