Setting up an environment for High Performance Computing (HPC), especially one using GPUs, can be daunting. There can be multiple dependencies, a number of required supporting libraries, and complex installation instructions. NVIDIA has made this easier with the release of HPC Application Containers on the NVIDIA GPU Cloud.

Yale University is seeking a Sr. Linux System Administrator in our Job of the Week. “In this role, you will work as a Linux senior administrator in ITS Systems Administration. Provide leadership in Linux server administration, for mission-critical services in a dynamic, 24/7 production data center environment.”

Researchers are using supercomputers to introduce and assess the impact of different configurations of defects on the performance of a superconductor. “When people think of targeted evolution, they might think of people who breed dogs or horses,” said Argonne materials scientist Andreas Glatz, the corresponding author of the study. ​“Ours is an example of materials by design, where the computer learns from prior generations the best possible arrangement of defects.”

The European ETP4HPC initiative has published a blueprint for the new Strategic Research Agenda for High Performance Computing. “This blueprint sketches the big picture of the major trends in the deployment of HPC and HPDA methods and systems, driven by the economic and societal needs of Europe, taking into account the changes expected in the underlying technologies and the overall architecture of the expanding underlying HPC infrastructure.”

Today ISC 2019 announced that this year's recipients of the ISC Travel Grant will come from Colombia and Botswana. The winners will each be awarded a grant of 2,500 euros to cover travel expenses and boarding. ISC High Performance will also provide the grant recipients free registration for the entire conference in Frankfurt, Germany.

Pol Forn from the Barcelona Supercomputing Center gave this talk at the BSC Annual Meeting. “QUANTIC is a joint venture between the Barcelona Supercomputing Center and the University of Barcelona. The research directions are focused on performing quantum computation in a laboratory of superconducting quantum circuits and studying new applications for quantum processors.”

Researchers are using powerful supercomputers at TACC to process data from the Gravity Recovery and Climate Experiment (GRACE). “Intended to last just five years in orbit for a limited, experimental mission to measure small changes in the Earth’s gravitational fields, GRACE operated for more than 15 years and provided unprecedented insight into our global water resources, from more accurate measurements of polar ice loss to a better view of the ocean currents, and the rise in global sea levels.”

The HiPEAC 2020 conference has issued its Call for Workshops. The event takes place January 20-22, 2020 in Bologna, Italy. “The HiPEAC conference is the meeting place for computing systems researchers in Europe. Put your research on the map with a paper presentation and get your paper published in the open access journal ACM TACO: Transactions on Architecture and Code Optimization.”

Ian Foster has been selected to receive the 2019 IEEE Computer Society (IEEE CS) Charles Babbage Award for his outstanding contributions in the areas of parallel computing languages, algorithms, and technologies for scalable distributed applications. “Foster’s research deals with distributed, parallel, and data-intensive computing technologies, and innovative applications of those technologies to scientific problems in such domains as materials science, climate change, and biomedicine. His Globus software is widely used in national and international cyberinfrastructures.”

Often, it’s not enough to parallelize and vectorize an application to get the best performance. You also need to take a deep dive into how the application is accessing memory to find and eliminate bottlenecks in the code that could ultimately be limiting performance. Intel Advisor, a component of both Intel Parallel Studio XE and Intel System Studio, can help you identify and diagnose memory performance issues, and suggest strategies to improve the efficiency of your code.

In this special guest feature, Dan Olds from OrionX continues his Epic HPC Road Trip series with a stop at NCAR in Boulder. “Their ability to increase model precision/resolution and to increase throughput at the same time is becoming more difficult over time due to core speed slowing down as more cores are added. In other words, new chips aren’t providing the same increase in performance as we’ve become accustomed to over the years.”

TACC has completed a major upgrade of their Ranch long-term mass data storage system. With thousands of users, Ranch archives are valuable to scientists who want to use the data to help reproduce the measurements and results of prior research. Computational reproducibility is one piece of the larger concept of scientific reproducibility, which forms a cornerstone of […]



White Papers

Previously, oil and gas firms relied on costly, central processing unit (CPU) intensive infrastructure to manage data usage and analysis speed. GPUs have given rise to a new set of opportunities for these firms. A new report from Penguin Computing outlines how the adoption of GPU-accelerated computing can offer oil and gas firms a significant return on investment (ROI) today and pave the way to additional advantage from future technical developments.