This week’s landmark discovery of gravitational waves and light generated by the collision of two neutron stars eons ago was made possible in part by signal verification and analysis performed on Comet, an advanced supercomputer based at SDSC in San Diego. LIGO researchers have so far consumed more than 2 million hours of computational time on Comet through OSG – including about 630,000 hours each to help verify LIGO’s 2015 findings and the current neutron star collision – using Comet’s Virtual Clusters for rapid, user-friendly analysis of extreme volumes of data, according to Würthwein.

Researchers are using new HPC techniques to learn more about how the West Nile virus replicates inside the brain. “Over several years, Demeler has developed analysis software for experiments performed with analytical ultracentrifuges. The goal is to facilitate the extraction of all of the information possible from the available data. To do this, we developed very high-resolution analysis methods that require high performance computing to access this information,” he said. “We rely on HPC. It’s absolutely critical.”

Today the San Diego Supercomputer Center announced that Christopher Irving will be the new manager of the Center’s High-Performance Computing systems, effective June 1, 2017. “Christopher has been involved in the many facets of deploying and supporting both our Gordon and Comet supercomputers, so this appointment is a natural fit for all of us,” said Amit Majumdar, director of SDSC’s Data Enabled Scientific Computing division. “He also has been coordinating closely with our User Services Group in his previous role, so he’ll now officially oversee SDSC’s high level of providing HPC and data resources for our broad user community.”

The San Diego Supercomputer Center has been granted a supplemental award from the National Science Foundation to double the number of GPUs on its petascale-level Comet supercomputer. “This expansion is reflective of a wider adoption of GPUs throughout the scientific community, which is being driven in large part by the availability of community-developed applications that have been ported to and optimized for GPUs,” said SDSC Director Michael Norman, who is also the principal investigator for the Comet program.

Today the San Diego Supercomputer Center (SDSC) announced that the Comet supercomputer has easily surpassed its target of serving at least 10,000 researchers across a diverse range of science disciplines, from astrophysics to redrawing the tree of life. “In fact, about 15,000 users have used Comet to run science gateway jobs alone since the system went into production less than two years ago.”

Rick Wagner from SDSC presented this talk at the 4th Annual MVAPICH User Group. “At SDSC, we have created a novel framework and infrastructure by providing virtual HPC clusters to projects using the NSF sponsored Comet supercomputer. Managing virtual clusters on Comet is similar to managing a bare-metal cluster in terms of processes and tools that are employed. This is beneficial because such processes and tools are familiar to cluster administrators.”

“We are pioneering the area of virtualized clusters, specifically with SR-IOV,” said Philip Papadopoulos, SDSC’s chief technical officer. “This will allow virtual sub-clusters to run applications over InfiniBand at near-native speeds – and that marks a huge step forward in HPC virtualization. In fact, a key part of this is virtualization for customized software stacks, which will lower the entry barrier for a wide range of researchers by letting them project an environment they already know onto Comet.”
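As background on what SR-IOV-based virtualization looks like at the node level, the sketch below simply enumerates SR-IOV-capable PCI devices (such as an InfiniBand HCA) on a Linux host by reading the standard sysfs attributes. It is a minimal illustration under those assumptions, not SDSC’s actual management tooling for Comet.

```python
# Minimal sketch: list SR-IOV-capable PCI devices and how many virtual
# functions (VFs) are currently enabled on each, using the standard Linux
# sysfs attributes sriov_totalvfs / sriov_numvfs. Illustrative only; this
# is not SDSC's management software for Comet.
from pathlib import Path

def sriov_devices(pci_root="/sys/bus/pci/devices"):
    """Yield (pci_address, total_vfs, enabled_vfs) for each SR-IOV-capable device."""
    for dev in Path(pci_root).iterdir():
        total = dev / "sriov_totalvfs"
        if total.exists():
            enabled = int((dev / "sriov_numvfs").read_text())
            yield dev.name, int(total.read_text()), enabled

if __name__ == "__main__":
    for addr, total, enabled in sriov_devices():
        print(f"{addr}: {enabled}/{total} virtual functions enabled")
```

On an SR-IOV-enabled host, each virtual function can be passed through to a guest VM, which is what allows virtualized cluster nodes to drive the InfiniBand fabric at close to native speed.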

The NSF-funded Comet supercomputer at SDSC was one of several high-performance computers used by researchers to help confirm the discovery of gravitational waves before a formal announcement was made.

“Over the last several years, an enormous amount of development effort has gone into Lustre to address users’ enterprise-related requests. Their work is not only keeping Lustre extremely fast (the Spider II storage system at the Oak Ridge Leadership Computing Facility (OLCF) that supports OLCF’s Titan supercomputer delivers 1 TB/s; and Data Oasis, supporting the Comet supercomputer at the San Diego Supercomputer Center (SDSC), supports thousands of users with 300 GB/s throughput) but also making it an enterprise-class parallel file system that has since been deployed for many mission-critical applications, such as seismic processing and analysis, regional climate and weather modeling, and banking.”

The San Diego Supercomputer Center is adding 800 GB Seagate SAS SSDs to significantly boost the data analytics capability of its Comet supercomputer. To expand its node-local storage capacity for data-intensive workloads, device pairs will be added to all 72 compute nodes in one rack of Comet, alongside the existing SSDs. This will bring the flash storage in a single node to almost 2 TB, with total rack capacity at more than 138 TB.
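The quoted per-node and per-rack figures are consistent with simple arithmetic; the back-of-the-envelope check below assumes roughly 320 GB of pre-existing local flash per node, a figure not stated in the announcement above.

```python
# Back-of-the-envelope check of the quoted capacities. The 320 GB value for
# the existing node-local flash is an assumption for illustration only.
new_ssd_gb = 800                       # one added Seagate SAS SSD
added_per_node_gb = 2 * new_ssd_gb     # SSDs are added in pairs per node
existing_per_node_gb = 320             # assumed pre-existing local flash
per_node_tb = (added_per_node_gb + existing_per_node_gb) / 1000
rack_tb = 72 * per_node_tb             # 72 compute nodes in the rack

print(f"per node: {per_node_tb:.2f} TB, per rack: {rack_tb:.1f} TB")
# -> per node: 1.92 TB ("almost 2 TB"), per rack: 138.2 TB ("more than 138 TB")
```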

Industry Perspectives

In this Let's Talk Exascale podcast, Tapasya Patki of Lawrence Livermore National Laboratory discusses ECP’s Power Steering Project. "Efficiently utilizing procured power and optimizing the performance of scientific applications at exascale under power and energy constraints are challenging for several reasons. These include the dynamic behavior of applications, processor manufacturing variability, and increasing heterogeneity of node-level components."

White Papers

Artificial intelligence has already had a profound effect on many industries. But for the healthcare sector, this collection of technologies is proving to be nothing short of transformative. Download the new report from HPE that explores how tools like GPUs and deep learning platforms are changing and advancing healthcare.