Wiwynn XC200G2 PCIe Gen4 16x Compute Accelerator Machine

The Wiwynn XC200G2 is a PCIe Gen4 accelerator chassis that holds up to sixteen acceleration devices such as GPUs, Intel Nervana NNPs, or FPGAs. Wiwynn has made its mark in the hyperscale and OCP markets, but it has been pushing the WiRack 19 product line more heavily lately. The WiRack 19 line, which includes the Wiwynn XC200G2, is designed for more traditional 19-inch racks. With the insatiable need for more compute acceleration from GPUs, FPGAs, and new deep learning accelerators, there is an arms race to maximize the number of accelerators in a single machine. These accelerators all need low latency and high bandwidth, which makes PCIe 4.0 switching an attractive option.

Since the Wiwynn XC200G2 does not have a server node internally in its default configuration, it connects to external server nodes. Connection options range from a single head node up to four connected head nodes, each of which can utilize the accelerators in the XC200G2.

In its standard configuration, the Wiwynn XC200G2 houses two drawers of 8x PCIe double-width devices. An optional configuration brings one or two server nodes into the chassis, replacing one of the 8x accelerator drawers.

Wiwynn XC200G2

For those doing machine learning and AI, PCIe Gen4 is a big deal. A single PCIe 4.0 x16 link to a CPU has around twice the bandwidth of the previous generation, along with lower latency. Compute accelerators that support PCIe 4.0 gain additional inter-accelerator bandwidth and lower latency as well. We have seen companies like NVIDIA address the same performance bottleneck with technology like NVLink.
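The roughly 2x bandwidth claim follows directly from the signaling rates: PCIe Gen3 runs at 8 GT/s per lane and PCIe Gen4 at 16 GT/s, both with 128b/130b encoding. A back-of-envelope sketch (nominal figures, ignoring protocol overhead beyond line encoding):

```python
def pcie_bandwidth_gbps(gt_per_s: float, lanes: int) -> float:
    """Nominal one-direction PCIe throughput in GB/s.

    GT/s is transfers per second per lane; with 128b/130b encoding,
    each transfer carries 128/130 of a bit of payload, and 8 bits
    make a byte.
    """
    encoding = 128 / 130  # PCIe Gen3/Gen4 line encoding efficiency
    return gt_per_s * encoding / 8 * lanes

gen3_x16 = pcie_bandwidth_gbps(8, 16)   # ~15.75 GB/s
gen4_x16 = pcie_bandwidth_gbps(16, 16)  # ~31.5 GB/s

print(f"PCIe 3.0 x16: {gen3_x16:.2f} GB/s")
print(f"PCIe 4.0 x16: {gen4_x16:.2f} GB/s")
```

Since the encoding scheme is unchanged between the two generations, doubling the transfer rate doubles usable bandwidth exactly, which is why a Gen4 x16 slot delivers what previously required two Gen3 x16 links.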

One of the hot new technologies on the horizon is PCIe Gen4 (or PCIe 4.0). The spec is being adopted by architectures such as IBM Power9, but it is still some time away for the x86 community. PCIe Gen4 took so long to arrive that PCIe Gen5 is close behind it, and parts of the industry are largely looking past PCIe Gen4 in anticipation of Gen5.

1 COMMENT

Cliff: it would be great if you could poll the x86 community and vendors to see if they are giving serious consideration to skipping PCIe 4.0 and jumping directly to PCIe 5.0. The rate at which the NVMe ecosystem is evolving seems to justify a big increase in MAX HEADROOM (upstream bandwidth), a realization that became widespread when prosumers hit the limits imposed by Intel's DMI 3.0 link.