OpenACC Talks Up Summit and Community Momentum at SC18

By John Russell

November 12, 2018

OpenACC – the directives-based parallel programming model for optimizing applications on heterogeneous architectures – is showcasing user traction and HPC impact at SC18. Most noteworthy is that five of the 13 CAAR applications optimized for the Summit supercomputer used OpenACC to accelerate performance. The CAAR (Center for Accelerated Application Readiness) program at Oak Ridge National Laboratory was established to prepare applications to take advantage of Summit, now the fastest supercomputer in the world.

OpenACC also introduced a new release (2.7) of the specification; reported roughly 130,000 downloads of PGI’s free OpenACC community edition compiler; and said more than 150 HPC applications, including five important commercial apps, have now been accelerated with OpenACC. Taken together, the milestones suggest genuine momentum.

You may remember OpenACC was developed by Cray, Nvidia, CAPS Enterprise, and PGI; the first spec (1.0) was delivered at SC11 and targeted GPUs. It followed OpenMP (1997), which at the time performed a similar function but focused on host x86 CPUs. The growing use of parallel programming tools such as OpenACC and OpenMP has tracked the steady rise of accelerator-based heterogeneous computing in recent years. Many wonder whether and when OpenACC and OpenMP will merge into a single spec; this was an often-stated goal at OpenACC’s start and has been talked about ever since. (See HPCwire article, NVIDIA Eyes Post-CUDA Era of GPU Computing.)

Summit is a good example of the emerging CPU/GPU hybrid compute architecture paradigm in which the CPU acts largely as a supervisor and the speed-up derives from parallel processing on GPUs. Each Summit node, for example, consists of two IBM Power9 CPUs and six Nvidia V100 GPUs.
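To make the paradigm concrete, here is a minimal, illustrative OpenACC example (not drawn from any CAAR code): the host CPU sets up the data and reaches the directive, and the compiler generates a parallel GPU kernel from the annotated loop.

```c
/* Minimal OpenACC example (illustrative only): the CPU orchestrates,
 * the GPU does the parallel arithmetic.
 * Compile with an OpenACC compiler, e.g. pgcc -acc saxpy.c */
#include <stdio.h>

#define N 1000000

int main(void) {
    static float x[N], y[N];
    const float a = 2.0f;

    for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    /* One directive offloads the loop: the compiler generates the
     * GPU kernel and manages the data movement declared here. */
    #pragma acc parallel loop copyin(x[0:N]) copy(y[0:N])
    for (int i = 0; i < N; i++)
        y[i] = a * x[i] + y[i];

    printf("y[0] = %f\n", y[0]); /* expect 4.0 */
    return 0;
}
```

Without the -acc flag, the same source simply runs serially on the CPU, which is the portability argument directive-based models make.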

OpenACC is understandably excited by its success with the CAAR applications. Parallelizing and scaling these codes to work on such a large, novel system is a challenge. In its official SC18 announcement, OpenACC offered testimonials from leaders of the CAAR efforts which used OpenACC:

Energy Exascale Earth System Model (E3SM) used for high-resolution simulation of the global coupled climate system. “The CAAR project provided us with early access to Summit hardware and access to PGI compiler experts. Both of these were critical to our success. PGI’s OpenACC support remains the best available and is competitive with much more intrusive programming model approaches.” – Mark A. Taylor, Multiphysics Applications, Sandia National Laboratories

LSDalton used in quantum chemistry. “Using OpenACC, we see large performance gains with very little effort. GPU acceleration varies over the course of our simulations due to differing fragment sizes, but is typically 3x–5x. On Summit we can now do simulations of several thousand atoms, compared to maybe 800 on Titan.” – Dmytro Bykov, Computational Scientist, Oak Ridge National Laboratory

FLASH used for astrophysics. “We’re using OpenACC on Summit to accelerate our most compute-intensive kernels. We love OpenACC interoperability and how this allows us to use multiple methods to perform memory placement and movement. CPU+GPU performance of a 288 species network on Summit, something impossible to do on Titan, is 2.9x faster than CPU only.” – Bronson Messer, Senior Scientist, Oak Ridge National Laboratory

GTC used for particle turbulence simulations for sustainable fusion reactions in ITER. “Using OpenACC, our scientists were able to achieve the acceleration needed for integrated fusion simulation with a minimum investment of time and effort in learning to program GPUs.” – Zhihong Lin, Professor and Principal Investigator, UC Irvine

Half in jest, Sunita Chandrasekaran, director of user adoption for OpenACC, told HPCwire the organization is now looking to be part of an effort that wins a Gordon Bell Prize. It nearly happened this year: one of the finalists, a University of Tokyo earthquake simulation, started out using OpenACC but switched to CUDA for the final submission to extract maximum performance.

It’s probably fair to say the announced 2.7 release is incremental. It adds a ‘self’ clause on compute constructs, enabling a program to use both the multicore host and the accelerator without dynamically changing the default device. Other changes include a ‘readonly’ modifier for the ‘copyin’ data clause and the ‘cache’ directive, and reductions now accept arrays and composite variables (C/C++ structs or classes and Fortran derived types). The sketch below illustrates the two headline clauses.
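Here is a minimal sketch of the ‘self’ and ‘readonly’ additions, assuming an OpenACC 2.7-capable compiler; the use_host flag and array names are invented for illustration:

```c
/* Sketch of OpenACC 2.7's 'self' clause and 'readonly' modifier
 * (illustrative; variable names are invented for this example). */
#include <stdio.h>

#define N 100000

int main(void) {
    static float in[N], out[N];
    int use_host = (N < 500000); /* hypothetical heuristic: small
                                    problems stay on the host cores */

    for (int i = 0; i < N; i++) in[i] = (float)i;

    /* self(cond): when cond is true the region executes on the local
     * (host) device; the default accelerator device is unchanged.
     * copyin(readonly: ...): promises 'in' is never written inside
     * the region, opening the door to caching optimizations. */
    #pragma acc parallel loop self(use_host) \
                copyin(readonly: in[0:N]) copyout(out[0:N])
    for (int i = 0; i < N; i++)
        out[i] = 2.0f * in[i];

    printf("out[1] = %f\n", out[1]); /* expect 2.0 */
    return 0;
}
```

The appeal of ‘self’ is that a single binary can route small problem sizes to the multicore host and large ones to the GPU without resorting to runtime calls such as acc_set_device_type().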

Looking ahead, Chandrasekaran said, “We have been talking about deep copy [and] there is interest from application developers for that.” A beta deep copy feature is being worked on and is available in the newly released PGI 18.10 Community Edition. Chandrasekaran declined to discuss other planned features.
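For context, deep copy concerns aggregate types whose members are themselves pointers: a shallow copyin moves the top-level struct but leaves its pointer members dangling on the device. Until automated deep copy arrives, developers typically write the pattern out by hand, roughly as in this illustrative sketch (not the PGI beta feature itself):

```c
/* Manual deep copy in OpenACC (illustrative). A shallow copyin of
 * 'v' would copy the pointer 'v.data' but not the array it points
 * to, so each level is copied and attached explicitly. */
#include <stdlib.h>

typedef struct {
    int n;
    double *data;
} vector;

int main(void) {
    vector v;
    v.n = 1000;
    v.data = malloc(v.n * sizeof(double));
    for (int i = 0; i < v.n; i++) v.data[i] = 1.0;

    /* Copy the struct, then its payload; the runtime attaches
     * v.data to the device copy of v. */
    #pragma acc enter data copyin(v)
    #pragma acc enter data copyin(v.data[0:v.n])

    #pragma acc parallel loop present(v)
    for (int i = 0; i < v.n; i++)
        v.data[i] *= 2.0;

    /* Bring results back and release device memory, payload first. */
    #pragma acc exit data copyout(v.data[0:v.n])
    #pragma acc exit data delete(v)

    free(v.data);
    return 0;
}
```

A true deep copy capability would let the compiler and runtime traverse such nested structures automatically, sparing developers this bookkeeping.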

SUSE, C-DAC and Osaka University are the three most recent institutions to join the OpenACC organization. OpenACC says the new members will “contribute to technical and marketing committees, shape the OpenACC specification to support their research and will help grow a community of OpenACC users who aim to perform more science and research with less programming effort.”

OpenACC continues to ramp up its outreach efforts. “GPU Hackathons, which started as OpenACC-only events under the Oak Ridge National Laboratory umbrella, have now grown into a series of events with 160 teams participating from all around the world. A majority of the teams choose OpenACC to start programming GPUs, but any GPU programming models and tools are welcome at the events,” according to the release.
