Moving HPC Workloads to the Cloud with Avere Systems at SC17

In this video from SC17 in Denver, Bernie Behn from Avere Systems describes how the company helps customers migrate HPC workloads to the Cloud.

“HPC workloads are incredibly large, encompassing datasets as large as several petabytes. With matching storage and compute requirements, organizations are determining how to best use the vast resources offered by cloud service providers to fill any gaps. However, large file sizes create difficulties when trying to move HPC data to these remote resources. Avere Systems helps solve these challenges to make HPC in the cloud a viable option.”

Traditional methods of moving data are expensive and time-consuming, and these processes often negate the value the cloud adds. Moving all of your data to the cloud is not necessary in order to use cloud compute for an individual application's workload. In fact, you don't need to move large datasets at all: a cloud caching filer can stage just the data each job requires, shifting the data-migration work onto the caching appliance.

The large datasets do not need to leave your data center. Only the HPC data a job actually needs (a small percentage of the total) is staged through the caching filer to the application running in the cloud. Once the job finishes, the filer writes that data back to its on-prem location.
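The read-through, write-back behavior described above can be sketched in a few lines. This is a minimal illustration with hypothetical helpers, not Avere's actual implementation: `origin` stands in for the on-prem filesystem, and `cache` for the cloud-side caching appliance.

```python
# Sketch of a read-through, write-back caching filer (illustrative only).
class CachingFiler:
    def __init__(self, origin):
        self.origin = origin   # on-prem dataset: path -> bytes
        self.cache = {}        # cloud-side cache holding only hot files
        self.dirty = set()     # files modified by the cloud job

    def read(self, path):
        # Only the files a job actually touches cross the WAN.
        if path not in self.cache:
            self.cache[path] = self.origin[path]
        return self.cache[path]

    def write(self, path, data):
        # Writes land in the cache and are flushed back later.
        self.cache[path] = data
        self.dirty.add(path)

    def flush(self):
        # When the job finishes, modified data returns on-prem.
        for path in self.dirty:
            self.origin[path] = self.cache[path]
        self.dirty.clear()
        self.cache.clear()
```

A job would `read` its inputs, `write` its outputs, and the filer's `flush` step returns results to the data center; untouched files never move.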

If you were using a typical model that ran entirely on-prem, you would need to move data off the local machines to free them for the next run. With the cloud, you can deploy and tear down resources on demand. Once your workloads finish, the billing for compute stops, and you haven't had to purchase additional hardware.
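The economics of on-demand versus purchased hardware come down to simple arithmetic. A back-of-the-envelope sketch, using entirely hypothetical placeholder figures (not vendor pricing):

```python
# Hypothetical cost comparison: on-demand cloud compute vs. buying hardware.
CLOUD_RATE = 3.00        # $/node-hour (assumed)
NODES = 100
HOURS_PER_RUN = 24
RUNS_PER_YEAR = 12

# Cloud: you pay only while jobs run.
cloud_cost = CLOUD_RATE * NODES * HOURS_PER_RUN * RUNS_PER_YEAR

# On-prem: capital expense up front plus ongoing operations (assumed figures).
HARDWARE_CAPEX = 2_000_000
ANNUAL_OPEX = 300_000
onprem_year1 = HARDWARE_CAPEX + ANNUAL_OPEX

print(f"cloud: ${cloud_cost:,.0f}/yr  on-prem year 1: ${onprem_year1:,.0f}")
```

The point of the sketch is the structure, not the numbers: cloud cost scales with hours actually used, while on-prem cost is paid whether the cluster is busy or idle.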

