In this video from SC18 in Dallas, Marc Hamilton from NVIDIA describes the all-new overclocked DGX-2H supercomputer. Built by NVIDIA, a cluster of 36 DGX-2H devices with 3 Petaflops of LINPACK performance was just ranked #62 on the TOP500 list of the world’s fastest supercomputers.

In this podcast, the Radio Free HPC team looks back on the highlights of SC18 and the newest TOP500 list of the world’s fastest supercomputers.

Buddy Bland shows off Summit, the world’s fastest supercomputer at ORNL. “The latest TOP500 list of the world’s fastest supercomputers is out, a remarkable ranking that shows five Department of Energy supercomputers in the top 10, with the first two captured by Summit at Oak Ridge and Sierra at Livermore. With the number one and number two systems on the planet, the “Rebel Alliance” vendors of IBM, Mellanox, and NVIDIA stand far and tall above the others.”

Two months after its introduction, the NVIDIA T4 GPU is featured in 57 separate server designs from the world’s leading computer makers. It is also available in the cloud, with the first availability of the T4 for Google Cloud Platform customers. “Just 60 days after the T4’s launch, it’s now available in the cloud and is supported by a worldwide network of server makers. The T4 gives today’s public and private clouds the performance and efficiency needed for compute-intensive workloads at scale.”

“With Bitfusion along with Mellanox and VMware, IT can now offer the ability to mix bare metal and virtual machine environments, such that GPUs in any configuration can be attached to any virtual machine in the organization, enabling easy access to GPUs for everyone in the organization,” said Subbu Rama, co-founder and chief product officer, Bitfusion. “IT can now pool together resources and offer an elastic GPU as a service to their organizations.”

In this video from SC18 in Dallas, Thor Sewell from Intel describes the company’s pending Cascade Lake Advanced Performance chip. “This next-gen platform doubles the cores per socket from an Intel system by joining a number of Cascade Lake Xeon dies together on a single package with the blue team’s Ultra Path Interconnect, or UPI. Intel will allow Cascade Lake-AP servers to employ up to two-socket (2S) topologies, for as many as 96 cores per server.”

In this video from SC18, Raj Hazra describes how Intel is driving the convergence of HPC and AI. “To meet the new computational challenges presented by this AI and HPC convergence, HPC is expanding beyond its traditional role of modeling and simulation to encompass visualization, analytics, and machine learning. Intel scientists and engineers will be available to discuss how to implement AI capabilities into your current HPC environments and demo how new, more powerful HPC platforms can be applied to meet your computational needs now and in the future.”

“Cloud computing offers a potential solution by allowing people to create and access computing resources on demand. Yet meeting the complex software demands of an HPC application can be quite challenging in a cloud environment. In addition, running HPC workloads on virtualized infrastructure may result in unacceptable performance penalties for some workloads. Because of these issues, relatively few organizations have run production HPC workloads in either private or public clouds.”

In this video from SC18 in Dallas, Trish Damkroger describes how Intel is pushing the limits of HPC and machine learning with a full suite of hardware, software, and cloud technologies. “Today’s high performance computers are unleashing discovery and insights at an unprecedented pace. The intersection of artificial intelligence and HPC has the potential to transform industries from life sciences to manufacturing, while solving some of the toughest challenges in our world. At SC18, HPC users got to experience how Intel’s holistic portfolio of products is transforming HPC from traditional modeling and simulation to visualization, analytics, and artificial intelligence.”

“The emerging AI community on HPC infrastructure is critical to achieving the vision of AI,” said Pradeep Dubey, Intel Fellow. “Machines that don’t just crunch numbers, but help us make better and more informed complex decisions. Scalability is the key to AI-HPC so scientists can address the big compute, big data challenges facing them and to make sense from the wealth of measured and modeled or simulated data that is now available to them.”

Today NVIDIA showcased its HPC leadership in the TOP500 list of the world’s fastest supercomputers. The closely watched list shows a 48 percent jump in one year in the number of systems using NVIDIA GPU accelerators. The total climbed to 127 from 86 a year ago, and is three times greater than five years ago. “With the end of Moore’s Law, a new HPC market has emerged, fueled by new AI and machine learning workloads. These rely as never before on our high performance, highly efficient GPU platform to provide the power required to address the most challenging problems in science and society.”
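The year-over-year figures cited above can be checked with a quick calculation (a minimal sketch; the 86 and 127 system counts are taken from the paragraph above):

```python
# Year-over-year growth in GPU-accelerated systems on the TOP500,
# using the counts quoted in the article: 86 a year ago, 127 now.
prev, curr = 86, 127
growth_pct = (curr - prev) / prev * 100
print(f"{growth_pct:.0f}% increase")  # rounds to the ~48% jump cited
```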

Industry Perspectives

In this special guest post, Axel Huebl looks at the TOP500 and HPCG with an eye on power efficiency trends to watch on the road to Exascale. "This post will focus on efficiency, in terms of performance per Watt, simply because the system power envelope is a major constraint for upcoming Exascale systems. With the great numbers from the TOP500, we try to extend theoretical estimates from theoretical Flop/Ws of individual compute hardware to system scale."
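The performance-per-Watt metric discussed above is straightforward to derive from TOP500-style entries, which list sustained LINPACK performance (Rmax, in TFlop/s) alongside total power draw (in kW). Below is a minimal sketch; the system names and figures are illustrative placeholders, not actual TOP500 data:

```python
# Hypothetical example: computing system-level power efficiency
# (GFlops/W) from TOP500-style Rmax and power figures.
# Names and numbers below are illustrative, not real list entries.
systems = {
    "System A": {"rmax_tflops": 143_500.0, "power_kw": 9_783.0},
    "System B": {"rmax_tflops": 94_640.0, "power_kw": 7_438.0},
}

def gflops_per_watt(rmax_tflops: float, power_kw: float) -> float:
    # Convert Rmax from TFlop/s to GFlop/s, and power from kW to W.
    return (rmax_tflops * 1_000.0) / (power_kw * 1_000.0)

for name, s in systems.items():
    eff = gflops_per_watt(s["rmax_tflops"], s["power_kw"])
    print(f"{name}: {eff:.2f} GFlops/W")
```

Since the TFlop-to-GFlop and kW-to-W conversions both scale by 1,000, the efficiency reduces to Rmax (TFlop/s) divided by power (kW), which is how Green500-style GFlops/W rankings are commonly quoted.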

White Papers

Mixing workloads rather than creating separate application domains is key to efficiency and productivity. Specific software is typically needed only in certain phases of product development, leaving systems idle the rest of the time. Download the insideHPC guide that explores how a powerful scheduling and resource management solution — such as Bright Cluster Manager — can slot other workloads into those idle clusters, thereby gaining maximum value from the hardware and software investment, and rewarding IT administrators with satisfied users.