Intel has a long history of making important announcements at the annual Supercomputing conference, and this year was no exception. This guest post from Intel covers the new technology that was front and center from Intel at SC18, including its Cascade Lake advanced performance processors, Intel Optane Persistent Memory, and more. Learn more about these new technologies designed to accelerate the convergence of high-performance computing and AI.

We are pleased to announce that our friends at UberCloud won a number of prestigious awards for HPC in the Cloud. At Hyperion Research’s HPC Market Update Breakfast Briefing, UberCloud and its partners received the Innovation Excellence Award for the UberCloud Experiment and case study #200, “Computer simulations of non-invasive transcranial electro-stimulation of the human brain in schizophrenia.”

In this video from SC18, Naoki Shibata from XTREME-D describes the company’s new HPC-as-a-Service offering. “Customers can use our easy-to-deploy turnkey HPC cluster system on public cloud, including setup of HPC middleware (OpenHPC-based packages), configuration of SLURM, OpenMPI, and OSS HPC applications such as OpenFOAM. The user can start using the HPC cluster (submitting jobs) within 10 minutes on the public cloud. Our team is a technical startup focused on HPC cloud technology, with the optimal skill set for HPC architecture, public cloud architecture, rapid development of web applications for HPC, and data analytics for building automated HPC architectural services.”
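As a rough illustration of the kind of workflow such a turnkey SLURM/OpenMPI cluster enables, a minimal batch script for a parallel OpenFOAM run might look like the sketch below. The partition name, node and rank counts, module names, and case directory are hypothetical assumptions for illustration, not XTREME-D specifics:

```shell
#!/bin/bash
#SBATCH --job-name=openfoam-demo   # job name shown in the queue
#SBATCH --nodes=2                  # number of cluster nodes (assumed)
#SBATCH --ntasks-per-node=16       # MPI ranks per node (assumed)
#SBATCH --time=01:00:00            # wall-clock limit
#SBATCH --partition=compute        # hypothetical partition name

# Load the MPI and OpenFOAM environments provided by the OpenHPC stack
# (exact module names vary by site).
module load openmpi
module load openfoam

# Decompose the case across MPI ranks, then run the solver in parallel.
# ./motorBike is a placeholder case directory.
decomposePar -case ./motorBike
mpirun -np "$SLURM_NTASKS" simpleFoam -parallel -case ./motorBike
```

A script like this would be submitted with `sbatch job.sh` and monitored with `squeue`, which matches the “submitting jobs within 10 minutes” workflow described above.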

Today AWS rolled out Amazon EC2 A1 Instances powered by new Arm-based AWS Graviton processors. That’s right, that clever fellow Jeff Bezos is making his own Arm chips. “AWS Graviton processors are a new line of processors that are custom designed by AWS, utilizing Amazon’s extensive expertise in building platform solutions for cloud applications running at scale. These processors deliver targeted power, performance, and cost optimizations.”
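For readers who want to try the new instance family, launching an A1 instance follows the same AWS CLI pattern as any other EC2 type (it requires configured AWS credentials and an Arm64 AMI). The AMI ID, key pair name, and region below are placeholder assumptions:

```shell
# Launch a single a1.large instance backed by a Graviton processor.
# ami-0123456789abcdef0 and my-keypair are placeholders; substitute an
# Arm64 AMI and key pair from your own account.
aws ec2 run-instances \
    --region us-east-1 \
    --instance-type a1.large \
    --image-id ami-0123456789abcdef0 \
    --key-name my-keypair \
    --count 1
```

The A1 family spans several sizes (a1.medium through a1.4xlarge), so the `--instance-type` value can be adjusted to match the workload.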

The Accenture Engineering Compute (AEC) digital platform is a first mover in the industry, with a highly differentiated AI/ML- and analytics-based job-resource prediction and orchestration engine that ensures the most efficient placement of HPC workloads on-prem or in the cloud. The solution delivers dramatic gains in capital equipment utilization (~2x), job throughput (~30%), and business agility with automated cloud bursting for HPC/Grid computing environments.

Last week at SC18 in Dallas, Univa announced a partnership with WekaIO, a high-performance scale-out file system storage company, to help enterprise customers accelerate the migration of their HPC workloads to the cloud. “Univa is working with WekaIO to integrate one of the industry’s fastest parallel file systems into its Navops Launch and offer customers a comprehensive, high-performance, hybrid cloud solution for HPC and machine learning workloads.”

Two months after its introduction, the NVIDIA T4 GPU is featured in 57 separate server designs from the world’s leading computer makers. It is also available in the cloud, with Google Cloud Platform customers among the first to gain access to the T4. “Just 60 days after the T4’s launch, it’s now available in the cloud and is supported by a worldwide network of server makers. The T4 gives today’s public and private clouds the performance and efficiency needed for compute-intensive workloads at scale.”

In this video from SC18, Raj Hazra describes how Intel is driving the convergence of HPC and AI. “To meet the new computational challenges presented by this AI and HPC convergence, HPC is expanding beyond its traditional role of modeling and simulation to encompass visualization, analytics, and machine learning. Intel scientists and engineers will be available to discuss how to implement AI capabilities into your current HPC environments and demo how new, more powerful HPC platforms can be applied to meet your computational needs now and in the future.”

Industry Perspectives

At SC18 in Dallas, I had a chance to catch up with Gary Grider from LANL. “So we’re forming a consortium to chase efficient computing. We see many of the HPC sites today seem to be headed down the path of buying machines that work really well with very dense linear algebra problems. The problem is: hardcore simulation can often not be a great fit on machines built for high Linpack numbers.”

White Papers

Co-design and offloading are important tools for achieving exascale computing. Application developers and system designers can take advantage of network offload and emerging co-design protocols to accelerate their current applications. Applying basic co-design and offloading methods even to smaller-scale systems can deliver more performance on less hardware, resulting in lower cost and higher throughput. Learn more by downloading this guide.