In this special guest feature from Scientific Computing World, Robert Roe looks at advances in exascale computing and the impact of AI on HPC development. “There is a lot of co-development; AI and HPC are not mutually exclusive. They both need high-speed interconnects and very fast storage. It just so happens that AI functions better on GPUs. HPC has GPUs in abundance, so they mix very well.”

This sponsored post from Intel explores how the Intel Rendering Framework, which brings together a number of optimized, open source rendering libraries, can deliver better performance at a higher degree of fidelity — without having to invest in extra hardware. By letting the CPU do the work, visualization applications can run anywhere without specialized hardware, and users are seeing better performance than they could get from dedicated graphics hardware with its limited memory.

Today AMD announced a new exascale-class supercomputer to be delivered to ORNL in 2021. Built by Cray, the “Frontier” system is expected to deliver more than 1.5 exaFLOPS of processing performance on AMD CPU and GPU processors to accelerate advanced research programs addressing the most complex compute problems. “The combination of a flexible compute infrastructure, scalable HPC and AI software, and the intelligent Slingshot system interconnect will enable Cray customers to undertake a new age of science, discovery and innovation at any scale.”

The impact of AI is just starting to be realized across a broad spectrum of industries. Tim Miller, Vice President Strategic Development at One Stop Systems (OSS), highlights a new approach — ‘AI on the Fly’ — where specialized high-performance accelerated computing resources for deep learning training move to the field near the data source. Moving AI computation to the data is another important step in realizing the full potential of AI.

HPC is no longer just HPC, but rather a mix of workloads that instantiate the convergence of AI, traditional HPC modeling and simulation, and HPDA (High Performance Data Analytics). Exit the traditional HPC center that just runs modeling and simulation and enter the world that must support the convergence of HPC-AI-HPDA computing, and sometimes with specialized hardware. In this sponsored post, Intel explores how HPC is becoming “more than just HPC.”

Martin Rieger from Penguin Computing gave this talk at the HPC User Forum. “Built on a secure, high-performance bare-metal server platform with supercomputing-grade, non-blocking InfiniBand interconnect infrastructure, Penguin on Demand can handle the most challenging simulation and analytics. But, because of access via the cloud (from either a traditional Linux command line interface (CLI) or a secure web portal) you get both instant access and extreme scalability — without having to invest in on-premises infrastructure or the associated operational costs.”

Mike Heroux from Sandia National Labs gave this talk at the HPC User Forum. “The Exascale Computing Project is accelerating delivery of a capable exascale computing ecosystem for breakthroughs in scientific discovery, energy assurance, economic competitiveness, and national security. The goal of the ECP Software Technology focus area is to develop a comprehensive and coherent software stack that will enable application developers to productively write highly parallel applications that can portably target diverse exascale architectures.”

In this video from GTC 2019 in San Jose, Harvey Skinner, Distinguished Technologist, discusses the advent of production AI and how the HPE AI Data Node offers a building block for AI storage. “The HPE AI Data Node is an HPE reference configuration which offers a storage solution that provides both the capacity for data, as well as a performance tier that meets the throughput requirements of GPU servers. The HPE Apollo 4200 Gen10 density optimized data server provides the hardware platform for the WekaIO Matrix flash-optimized parallel file system, as well as the Scality RING object store.”

Computer systems are about to get a whole lot faster. This year, starting at the high end of the market, a transition will begin toward systems based on PCI Express 4.0, doubling the interconnect speed to 64GB/sec over a 16-lane connection. Tim Miller, Vice President Strategic Development for One Stop Systems, explores the expected speed and innovation stemming from the introduction of PCI Express 4.0.
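As a back-of-the-envelope check on those figures (an illustration added here, not taken from the article): PCIe 4.0 signals at 16 GT/s per lane with 128b/130b encoding, versus 8 GT/s for PCIe 3.0, which is where the doubling comes from. A short sketch of the arithmetic:

```python
# Illustrative theoretical PCIe bandwidth calculation (not from the article).
def pcie_bandwidth_gbps(gt_per_s, lanes, encoding=(128, 130)):
    """Theoretical per-direction bandwidth in GB/s (decimal gigabytes)."""
    payload, total = encoding                       # 128b/130b line encoding
    usable_bits = gt_per_s * 1e9 * payload / total  # usable bits/s per lane
    return usable_bits / 8 * lanes / 1e9            # bytes/s -> GB/s

gen3 = pcie_bandwidth_gbps(8, 16)   # PCIe 3.0 x16: ~15.8 GB/s per direction
gen4 = pcie_bandwidth_gbps(16, 16)  # PCIe 4.0 x16: ~31.5 GB/s per direction
print(f"Gen3 x16: {gen3:.1f} GB/s, Gen4 x16: {gen4:.1f} GB/s per direction")
```

The oft-quoted 64GB/sec figure for a x16 link counts both directions of the full-duplex link: roughly 2 × 31.5 GB/s.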

Today NVIDIA announced plans to acquire Mellanox for approximately $6.9 billion. The acquisition will unite two of the world’s leading companies in HPC. Together, NVIDIA’s computing platform and Mellanox’s interconnects power over 250 of the world’s TOP500 supercomputers and have as customers every major cloud service provider and computer maker.

Industry Perspectives

Often, it’s not enough to parallelize and vectorize an application to get the best performance. You also need to take a deep dive into how the application is accessing memory to find and eliminate bottlenecks in the code that could ultimately be limiting performance. Intel Advisor, a component of both Intel Parallel Studio XE and Intel System Studio, can help you identify and diagnose memory performance issues, and suggest strategies to improve the efficiency of your code.
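The kind of access-pattern issue such tools flag can be illustrated without Intel Advisor itself. In this minimal sketch (an illustration added here, not the tool's workflow), the same reduction is computed with two loop orders: the row-major walk touches contiguous data, while the column-major walk strides across rows — the memory-access pattern a profiler would highlight on large arrays:

```python
import time

# Illustrative sketch only -- not Intel Advisor's own workflow.
N = 500
matrix = [[i * N + j for j in range(N)] for i in range(N)]

def sum_row_major(m, n=N):
    # Inner loop walks along a row: contiguous, cache-friendly access.
    return sum(m[i][j] for i in range(n) for j in range(n))

def sum_col_major(m, n=N):
    # Inner loop walks down a column: strided access across rows.
    return sum(m[i][j] for j in range(n) for i in range(n))

assert sum_row_major(matrix) == sum_col_major(matrix)  # same answer either way
for fn in (sum_row_major, sum_col_major):
    start = time.perf_counter()
    fn(matrix)
    print(f"{fn.__name__}: {time.perf_counter() - start:.4f} s")
```

Both variants produce identical results; the point is that a memory-aware profiler distinguishes them by how they traverse memory, not by what they compute.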

White Papers

Today, through their ability to adapt, solve problems and simulate human intelligence, AI-based applications are being used across industries and sectors to supplement human ability. Download the new special report from insideHPC, “Augmented Intelligence in Government,” brought to you by Dell, to discover the latest technologies that underpin AI, explore current machine learning applications in government, learn from real-world successes, and see how government agencies can benefit from AI.