Gen-Z Consortium Puts New High Performance Interconnect in Motion

By Doug Black

October 12, 2016

Industry powerhouses have joined forces to address an issue that has confounded system architects since the advent of multicore computing, one that has gained in urgency with the rising tide of big data: the need to bring processing power and data access into balance. The Gen-Z Consortium has set out to create an open, high-performance memory-semantic fabric interconnect and protocol that scales from the node to the rack.

Gen-Z brings together 19 companies (ARM, Cray, Dell EMC, HPE, IBM, Mellanox, Micron, Seagate, Xilinx and others) and bills itself as a transparent, non-proprietary standards body that will develop a “flexible, high-performance memory semantic fabric (providing) a peer-to-peer interconnect that easily accesses large volumes of data while lowering costs and avoiding today’s bottlenecks.” The not-for-profit organization said it will operate like other open source entities and will make the Gen-Z standard available free of charge.

The consortium, which has vowed to enable Gen-Z systems in 2018, aims to retire what it says are obsolete “programmatic and architectural assumptions”: that storage is slow, persistent, and reliable, while data in memory is fast but volatile. Those assumptions, the consortium contends, are no longer optimal in the face of new storage-class memory technologies, which converge storage and memory attributes. Its objectives: a new approach to data access that takes on explosive data growth, real-time application requirements, the emergence of low-latency storage-class memory, and the demand for rack-scale resource pools.
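“Memory semantics” here means that fabric-attached memory is reached with ordinary load/store operations rather than block I/O requests. Gen-Z itself operates below the operating system, but as a rough illustration of the programming-model difference, the sketch below contrasts block-style read/write calls with byte-addressable, load/store-style access via mmap; the file name is hypothetical and merely stands in for storage-class memory.

```python
# Illustrative only: contrasts block-style I/O with load/store-style
# ("memory semantic") access. Gen-Z itself sits below the OS; this
# sketch only shows the programming-model difference. "scratch.bin"
# is a hypothetical file standing in for storage-class memory.
import mmap
import os

PATH = "scratch.bin"

# Create a small backing file to map.
with open(PATH, "wb") as f:
    f.write(b"\x00" * 4096)

# Block semantics: explicit read/write calls move whole buffers
# through the I/O stack.
with open(PATH, "r+b") as f:
    f.seek(128)
    f.write(b"block write")

# Memory semantics: once mapped, reads and writes are plain
# byte-addressable loads and stores, with no explicit I/O calls.
with open(PATH, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as m:
        m[0:4] = b"ABCD"   # a "store" to offset 0
        print(m[0:4])      # a "load" back: b'ABCD'

os.remove(PATH)
```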

Kurtis Bowman, director, server solutions, office of the CTO, at consortium member Dell EMC, said that 12 of the member companies have worked for the past year to develop what he called a “.7- or .8-level spec” on the fabric, “so there’s still opportunity for new members to contribute to the spec, make it stronger,” but enough work has been done “with the spec in proving out that the technology itself is right.”

“We get asked a lot, ‘Why the new bus?’” he said. “It’s because there’s really nothing that today solves all the problems that we think exist. One is that memory is flat or shrinking in the servers that we have today. So the bandwidth per core is shrinking to a point where today we have less bandwidth per core than we did in 2003. The memory capacity per core is shrinking, the I/O per core is shrinking. It really comes down to there’s just not enough pins on the processor to be able to get the requisite amount of memory and I/O that you need.”
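Bowman’s per-core claim is easy to sanity-check with rough peak numbers. The sketch below is a back-of-envelope calculation using illustrative DRAM configurations, not figures from the consortium: a 2003-era single-core socket with dual-channel DDR-400 versus a 2016-era 22-core socket with four DDR4-2400 channels.

```python
# Back-of-envelope sketch of the "bandwidth per core" squeeze Bowman
# describes. All configurations are illustrative assumptions, not
# consortium figures.

def bw_per_core_gbs(channels, mts_per_channel, bytes_per_transfer, cores):
    """Peak DRAM bandwidth (GB/s) divided evenly across cores."""
    total_gbs = channels * mts_per_channel * bytes_per_transfer / 1e3
    return total_gbs / cores

# Circa 2003: dual-channel DDR-400 feeding a single core.
old = bw_per_core_gbs(channels=2, mts_per_channel=400,
                      bytes_per_transfer=8, cores=1)

# Circa 2016: four DDR4-2400 channels shared by a 22-core socket.
new = bw_per_core_gbs(channels=4, mts_per_channel=2400,
                      bytes_per_transfer=8, cores=22)

print(f"~2003: {old:.1f} GB/s per core")  # ~6.4 GB/s
print(f"~2016: {new:.1f} GB/s per core")  # ~3.5 GB/s
```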

He emphasized the need to solve this challenge as real-time workloads are increasingly adopted: “You have to be able to quickly analyze the data coming in, get some insights from that data, because as it takes longer to analyze that data, your time to insights pushes out and makes it less valuable. So we want to make it so it’s easier to get compute and data closer together and allow those to be done” in a standardized way, across CPUs, GPUs, FPGAs and other architectures. “All of them need access to the memory that’s available.”

Gen-Z touts the following benefits:

- High bandwidth and low latency via a simplified interface based on memory semantics, scalable to 112 GT/s and beyond with DRAM-class latencies (a rough bandwidth conversion follows this list).
- Support for advanced workloads by enabling data-centric computing with scalable memory pools and resources for real-time analytics and in-memory applications.
- Software compatibility with no required changes to the operating system while scaling from simple, low-cost connectivity to a highly capable, rack-scale interconnect.
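On the first point, 112 GT/s is a per-lane signaling rate; usable bandwidth depends on link width and encoding overhead. The conversion below is a rough sketch under an assumed encoding efficiency; the consortium’s specification, not this arithmetic, is authoritative.

```python
# Rough conversion from per-lane signaling rate (GT/s) to usable
# one-direction bandwidth. Link width and encoding efficiency are
# illustrative assumptions, not values from the Gen-Z spec.

def link_bandwidth_gbs(gts_per_lane, lanes, encoding_efficiency):
    """Approximate one-direction bandwidth of a serial link in GB/s."""
    return gts_per_lane * lanes * encoding_efficiency / 8  # bits -> bytes

EFF = 64 / 66  # assuming 64b/66b-class encoding overhead

print(f"{link_bandwidth_gbs(112, 1, EFF):.1f} GB/s per 112 GT/s lane")      # ~13.6
print(f"{link_bandwidth_gbs(25, 16, EFF):.1f} GB/s for a 25 GT/s x16 link")  # ~48.5
```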

Gartner’s Chirag Dekate, research director for HPC, servers and emerging technologies, said the consortium’s focus on data movement has important implications for high-growth segments of the advanced-scale computing market, such as data analytics and machine learning, that utilize coprocessors and accelerators.

“These technologies are crucial in delivering the much needed computational boost for the underlying applications,” Dekate said. “These architectures are biased towards extreme compute capability. However, this results in I/O bottlenecks across the stack.”

He said coprocessors and accelerators utilize the PCIe bus to synchronize host and device memories, despite a roughly three-orders-of-magnitude gap between device FLOPS rates and the bandwidth of the underlying PCIe bus. “This essentially translates to dramatic inefficiencies in performance, especially in instances where there isn’t sufficient parallelism to hide the data access latencies,” said Dekate. “This problem is only going to get worse as the computational capabilities of core architectures evolve more rapidly than the supporting memory subsystems, resulting in a fundamental mismatch between data movement within a compute node and the floating point rate of modern processors.”
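The compute-to-I/O gap Dekate describes can be reproduced with round numbers. The figures below are illustrative assumptions (a ~5 TFLOPS accelerator behind a ~16 GB/s PCIe 3.0 x16 link), not Gartner data; the point is the ratio, not the exact values.

```python
# Rough check of the compute-to-I/O gap Dekate describes. Device
# figures are illustrative assumptions, not Gartner data.

flops = 5.0e12      # ~5 TFLOPS for a 2016-era accelerator
pcie_bps = 16.0e9   # PCIe 3.0 x16, ~16 GB/s in each direction

# Floating-point operations the device can retire per byte that
# crosses the host-device bus.
flops_per_byte = flops / pcie_bps
print(f"~{flops_per_byte:.0f} FLOPs per byte moved over PCIe")  # ~313

# Unless each byte fetched feeds hundreds of operations, the
# accelerator stalls on the bus instead of computing.
```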

Initiatives like Gen-Z are crucial for addressing the data movement challenges that emerging compute platforms are facing, he said. “The success of Gen-Z will depend on the consortium’s ability to expand and integrate broader scale of processor vendors to be able to have the broadest impact in customer datacenters.”

Gen-Z said it expects to have the core specification, covering the architecture and protocol, finalized in late 2016. Proof-of-concept systems developed on FPGAs will follow, with fully Gen-Z-enabled systems on track for mid-2018. Other consortium members include AMD, Cavium Inc., Huawei, IDT, Lenovo, Microsemi, Red Hat, SK Hynix and Western Digital.
