Advancing Network Features Of Petascale Computers

Message Passing Interface (MPI) is the dominant parallel computing model on supercomputers today, including petascale systems capable of executing one quadrillion floating-point operations per second. MPI allows the thousands of nodes in these large clusters to “talk” with one another over high-speed internal networks, such as InfiniBand and high-speed Ethernet.

Dhabaleswar K. Panda, professor of computer science and engineering at The Ohio State University (OSU), is investigating how these next-generation systems can expose topology, routing and status information, network features that can improve performance and scalability for many applications.

“This project will have significant impact in deriving guidelines for designing, deploying and using next generation petascale systems,” said Panda. “This study involves National Science Foundation researchers from OSU, Texas Advanced Computing Center (TACC), University of California – San Diego and San Diego Supercomputer Center operating large-scale simulations on TACC’s Ranger system and other supercomputers.”

In a related project, the team is studying MPI-2 one-sided communication operations to improve scaling and performance on petascale systems. The researchers are investigating methods to couple one-sided communication with hardware support from InfiniBand and to leverage them in scientific applications. As part of this project, the team is using and enhancing the MVAPICH2 software, a widely used MPI-2 implementation over InfiniBand, 10GigE/iWARP and RoCE.

Karen Tomko, an Ohio Supercomputer Center senior systems developer/engineer, is supporting Panda’s team on both projects, providing expertise with the MPI library and scientific applications and helping facilitate production-level testing.