The Week in HPC Research

Tiffany Trader

We’ve scoured the journals and conference proceedings to bring you the top research stories of the week. This diverse set of items includes the latest CAREER award recipient; the push to bring parallel computing to the classroom; HPC in accelerator science; the emerging Many-Task Computing paradigm; and a unified programming model for data-intensive computing.

Palermo Wins NSF CAREER Award

Texas A&M University researcher Dr. Sam Palermo has won the prestigious CAREER award for his proposal, “Process, Voltage, and Temperature (PVT)-Tolerant CMOS Photonic Interconnect Transceiver Architectures.”

An assistant professor in the Department of Electrical and Computer Engineering at Texas A&M, Palermo is developing energy-efficient transceivers for a unified inter- and intra-chip photonic interconnect architecture. The work is significant because conventional off-chip electrical interconnects cannot increase their pin-bandwidths much further due to channel-loss limitations, whereas silicon photonic interconnects offer distance-independent connectivity whose pin-bandwidth scales with the degree of wavelength-division multiplexing (WDM).

The project sets the stage for an explosion in interconnect bandwidth capacity, according to Palermo, who believes these photonic interconnect architectures could revolutionize a wide range of computational devices. Future smart mobile devices capable of terascale performance, multi-channel high-resolution magnetic resonance imaging, and even exascale supercomputing are all potential targets. What's more, the proposed technology reduces the energy demands of these complex systems.

The CAREER award was established by the National Science Foundation to recognize junior faculty who advance the discovery process and inspire game-changing thinking.

The large number of research pieces dedicated to computer science education this week highlights the need for an updated and relevant curriculum. To that end, parallel computing cannot be ignored.

“How can parallel computing topics be incorporated into core courses that are taken by the majority of undergraduate students?” asks a team of researchers from Knox College, Portland State University and Lewis & Clark College.

Their paper outlines the benefits of using GPUs to teach parallel programming. The authors describe how GPU computing with CUDA was brought into the core undergraduate computer organization course at two different colleges.

“We have found that even though programming in CUDA is not necessarily easy, programmer control and performance impact seem to motivate students to acquire an understanding of parallel architectures,” they write.

A North Carolina-based group of researchers adds its voice to the discussion, advocating the use of higher-level abstractions to teach parallel computing. The group argues that it is no longer feasible to train students solely in the programming of single-processor systems; students at all levels must be taught the essentials of multicore programming.

The authors describe two approaches: "The first approach uses a new software environment that creates a higher level of abstraction for parallel and distributed programming based upon a pattern programming approach. The second approach uses compiler directives to describe how a program should be parallelized."

A team of researchers from the University of Central Florida is also emphasizing the need to refresh the computer science curriculum with parallel techniques. More specifically, the researchers advise introducing parallel programming across the undergraduate curriculum through an interdisciplinary course on computational modeling.

The core message is the same:

“The end of exponential growth in the computing power available on a single processing element has given birth to an era of massively parallel computing where every programmer must be trained in the art and science of parallel programming,” they write.

Furthermore:

“The construction of computational models has become a fundamental process in the discovery process for all scientific disciplines, and there is little instructional support to enable the next generation of scientists and engineers to effectively employ massively parallel high-performance computing machines in their scientific process.”

The authors argue that because computational modeling straddles several key technology waves, namely big data, computational statistics, and model checking, it is a particularly good choice for introducing today's students to parallel programming methods.
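To make the pedagogical point concrete, here is a minimal sketch (not drawn from the UCF course itself, and written in Python with the standard multiprocessing module purely as an assumed teaching vehicle) of the kind of first parallel-programming exercise such a curriculum might use: a deliberately naive prime count split into independent chunks that run on separate cores.

```python
from multiprocessing import Pool

def count_primes(bounds):
    """Count primes in [lo, hi) by trial division (deliberately naive,
    so there is enough work per chunk to be worth parallelizing)."""
    lo, hi = bounds
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    # Split one big range into four independent chunks, one per worker.
    chunks = [(i, i + 25_000) for i in range(0, 100_000, 25_000)]
    with Pool(processes=4) as pool:
        total = sum(pool.map(count_primes, chunks))
    print(total)
```

The decomposition idea (split the work, run the chunks independently, combine the results) carries over directly to MPI or GPU settings, which is the transferable skill such courses aim for.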

A group of researchers from Lawrence Berkeley National Laboratory (LBNL) and Los Alamos National Laboratory (LANL), led by LBNL's Robert Ryne, has published a poster highlighting the importance of accelerators (think particles, not processors) to science.

Particle accelerators contribute to a wide range of disciplines, among them materials science, chemistry, biosciences, high-energy physics, and nuclear physics. Marvels of engineering, these machines have important applications in energy, the environment, and national security, and their use in drug design and other medical therapies has tremendous value for quality of life.

The progress of accelerator science and technology is directly tied to advances in computer science. Accelerator modeling brings about cost and risk reduction as well as design optimization and the testing of new ideas.

As for future challenges and opportunities, the poster notes the potential for accelerators to be used as light sources in multiple implementations, including advanced injectors, beam manipulation, novel seeding schemes, and more. Laser-plasma accelerators (LPAs) are on track to revolutionize accelerator technology: they have a place in enhanced beam-quality control and 10 GeV stages, and may one day lead to an LPA collider. Researchers are also exploring new accelerator designs, for example electron-ion colliders, FRIB, and muon accelerators.

On the computational modeling side, there are challenges with regard to programming at scale and extracting the performance potential of multicore and hybrid machines. Extreme-scale computing affects all aspects of HPC: algorithms, I/O, data analysis, visualization, and more. Other important points raised are using statistical methods for fast emulators and bringing HPC into the control room for near-real-time feedback to experiments.

The traditional classifications of High-Throughput Computing (HTC) and High-Performance Computing (HPC) are no longer adequate, according to a team of researchers from the National Institute of Supercomputing and Networking at the Korea Institute of Science and Technology Information (KISTI). The reason? An emerging class of applications that require millions or even billions of tasks (communicating with each other through files) to be processed with relatively short per-task execution times. The researchers refer to this new application segment as Many-Task Computing (MTC).

Traditional middleware systems widely used in HTC or HPC are not suitable for supporting MTC applications; therefore, a new protocol is needed to bridge the gap between HTC and HPC, they argue. They have authored a paper describing the key MTC characteristics and presenting a middleware system to fully support these applications.

Some of the unique characteristics of this new computing paradigm are as follows:

A very large number of tasks (i.e., millions or even billions of tasks).

Relatively short per-task execution times (i.e., seconds to minutes long).

Data-intensive tasks (i.e., tens of MB of I/O per CPU second).

A large variance of task execution times (i.e., ranging from hundreds of milliseconds to hours).

Communication-intensive workloads that exchange data through files rather than through a message passing interface such as MPI.
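To illustrate how such file-coupled tasks differ from MPI-style programs, here is a hedged Python sketch (this is not the KISTI middleware, and all names are invented for illustration): each short-lived task reads its input from a file and writes its result to a file, so the only coordination between tasks happens through the file system.

```python
import os
from concurrent.futures import ProcessPoolExecutor

def square_task(paths):
    """One short-lived task: read an integer from its input file, write
    the square to its output file. Tasks share data only through files."""
    in_path, out_path = paths
    with open(in_path) as f:
        x = int(f.read())
    with open(out_path, "w") as f:
        f.write(str(x * x))
    return out_path

def run_workflow(values, workdir):
    # Stage one input file per task.
    jobs = []
    for i, v in enumerate(values):
        in_path = os.path.join(workdir, f"in_{i}.txt")
        out_path = os.path.join(workdir, f"out_{i}.txt")
        with open(in_path, "w") as f:
            f.write(str(v))
        jobs.append((in_path, out_path))
    # Dispatch the independent tasks; a real MTC middleware would dispatch
    # millions of these, which is where traditional schedulers break down.
    with ProcessPoolExecutor() as pool:
        out_paths = list(pool.map(square_task, jobs))
    # Collect results by reading the output files back.
    results = []
    for path in out_paths:
        with open(path) as f:
            results.append(int(f.read()))
    return results

if __name__ == "__main__":
    import tempfile
    with tempfile.TemporaryDirectory() as d:
        print(run_workflow([1, 2, 3, 4], d))  # [1, 4, 9, 16]
```

At a few tasks this is trivial; the MTC argument is that per-task scheduling and file-system overheads dominate once the task count reaches millions, which is precisely what conventional HTC/HPC middleware was never designed for.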

“We hope our research can give an insight for a next generation distributed middleware system that can support the most challenging scientific applications,” they write.

Faced with the need to process large volumes of data, researchers have several computational paradigms to select from, including batch processing, iterative, interactive, memory-based, data-flow-oriented, relational, and structured approaches, among others. These different techniques are largely incompatible with one another, but what if a unified framework could support them all? That is exactly what the research duo of Maneesh Varshney and Vishwa Goudar from the Computer Science Department at the University of California, Los Angeles, had in mind when they developed Blue.

The researchers lay out their findings in a new technical report, "Blue: A Unified Programming Model for Diverse Data-intensive Cloud Computing Paradigms."

They write: “The motivation for this paper is to ease the development of new cluster applications, by introducing an intermediate layer (Figure 1) between resource management and applications. This layer [serves as] a generic programming model upon which any arbitrary cluster application can be built. Not only will this significantly diminish the cost of developing applications, the users will be able to easily select the computation paradigm that best meets their needs.”

In developing the Blue framework and programming model, the researchers aimed for a solution that was neither too low-level and difficult to implement, nor too high-level and fixed. The paper includes an outline of the implementation strategy and points out the framework's key strengths (notably efficiency and fault tolerance for cluster programs) and limitations (while it targets data-intensive computational problems, it is not the best choice for task parallelism).
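Blue's actual programming model is detailed in the technical report; purely to illustrate the idea of a generic intermediate layer between resource management and applications, the hypothetical Python sketch below builds two different paradigms, a batch-style word count and a round-based aggregate, on top of a single "run a function over partitions" primitive. All class and function names here are invented for illustration and are not Blue's API.

```python
from concurrent.futures import ThreadPoolExecutor

class ClusterLayer:
    """Hypothetical intermediate layer in the spirit the report describes:
    one generic primitive that runs a function over data partitions.
    (Invented for illustration; this is not Blue's real interface.)"""

    def __init__(self, workers=4):
        self.workers = workers

    def run(self, fn, partitions):
        # Apply fn to every partition in parallel and gather the results.
        with ThreadPoolExecutor(max_workers=self.workers) as pool:
            return list(pool.map(fn, partitions))

def word_count(layer, docs):
    """Paradigm 1: a batch (MapReduce-style) job built on the generic layer."""
    return sum(layer.run(lambda doc: len(doc.split()), docs))

def parallel_mean(layer, partitions):
    """Paradigm 2: an aggregate computed as rounds of parallel work on the
    same primitive; iterative paradigms repeat such rounds to convergence."""
    sums = layer.run(sum, partitions)
    counts = layer.run(len, partitions)
    return sum(sums) / sum(counts)

if __name__ == "__main__":
    layer = ClusterLayer()
    print(word_count(layer, ["a b c", "d e"]))        # 5
    print(parallel_mean(layer, [[1, 2], [3, 4, 5]]))  # 3.0
```

The design point the paper makes is visible even in this toy: both paradigms are written against one shared primitive, so applications inherit the layer's scheduling and fault-tolerance machinery instead of reimplementing it per paradigm.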
