Stay connected, up-to-date, and informed on all things parallel development via Go Parallel, where you'll find viewpoints, how-to's, software tools, and educational information to help your software development work shine. http://goparallel.sourceforge.net/

The Department of Energy (DOE) announced a $19 million award to Intel Federal to develop exascale processors, next-generation memories, and ultra-fast input/output (I/O) technologies for Lawrence Livermore National Security’s (LLNS’s) Extreme-Scale Computing Research and Development “FastForward” program.

By enlisting broad cooperative contributions from industry, academia, and other national laboratories, DOE’s LLNS aims to develop the high-performance computing (HPC) capabilities needed to achieve exascale systems by 2020.


<em>As the first member of its Many Integrated Core (MIC) architecture, Intel's Xeon Phi coprocessor board makes use of its 22-nanometer 3-D Tri-gate transistors. Source: Intel<strong>&reg;</strong></em>

“Public-private partnerships will significantly help move high-performance computing forward [allowing] current and future generations of scientists and engineers to develop breakthrough advancements,” says David Patterson, president of Intel Federal LLC.

DOE also has announced awards to AMD, EMC, HDF Group, Nvidia, and Whamcloud. All contracts aim to help LLNS develop the necessary computing infrastructure to meet its mandates of applying exascale computing technologies to aid the economy, increase security, shepherd the aging U.S. nuclear arsenal, and optimize energy consumption.

“Exascale systems are critical for achieving the Department of Energy’s goals – to ensure national security and promote scientific advancements,” states William Harrod, Division Director of Research in the DOE Office of Science’s Advanced Scientific Computing Research. “From long-term weather forecasting and developing drugs for the most severe diseases to analyzing new ways to use energy efficiently, science and engineering researchers need much more computing capacity than is available today in petascale systems.”

Exascale computing requires a 1000-fold increase in computing power over today’s petascale systems, but aims to achieve those goals with only a modest increase in power consumption to about 20 megawatts. As a result, all processor, memory, and I/O speedups will have to be achieved with more innovative approaches than merely turning up the clock speed. LLNS’s FastForward program is aimed at cultivating all the technologies needed to enable exascale HPCs within its trim energy budget.
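The scale of that efficiency challenge is easy to quantify. The short calculation below works out the performance-per-watt an exascale system must hit under the 20-megawatt cap; the petascale baseline figures used for comparison are illustrative assumptions, not numbers from the article.

```python
# Back-of-envelope check of the exascale power-efficiency target.
EXAFLOPS = 1e18          # 10^18 floating-point operations per second
POWER_BUDGET_W = 20e6    # stated power envelope: 20 megawatts

target_efficiency = EXAFLOPS / POWER_BUDGET_W   # FLOPS per watt
print(f"Exascale target: {target_efficiency / 1e9:.0f} GFLOPS/W")

# Assumed representative petascale machine (~10 PFLOPS at ~10 MW);
# these baseline numbers are illustrative, not from the article.
peta_efficiency = 10e15 / 10e6
print(f"Petascale baseline (assumed): {peta_efficiency / 1e9:.0f} GFLOPS/W")
print(f"Required efficiency gain: ~{target_efficiency / peta_efficiency:.0f}x")
```

Under these assumptions, the target works out to 50 GFLOPS per watt, roughly a 50-fold efficiency improvement over the assumed petascale baseline, which is why clock-speed scaling alone cannot get there.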

Intel Federal’s twofold contract will apply the whole range of Intel’s capabilities, from basic research through development and prototyping to systems integration. Besides its MIC architecture and Xeon Phi massively parallel coprocessors, Intel will also enlist the 3-D memory technology it developed with Micron Technology, called the Hybrid Memory Cube, as well as the high-speed interconnect technologies it acquired earlier this year: QLogic’s InfiniBand switched-fabric networking products and Cray’s Gemini interconnect technology.

________________________________________________________________

Colin Johnson is a Geeknet contributing editor and veteran electronics journalist, writing for publications from McGraw-Hill’s Electronics to UBM’s EETimes. Colin has written thousands of technology articles covered by a diverse range of major media outlets, from the ultra-liberal National Public Radio (NPR) to the ultra-conservative Rush Limbaugh Show. A graduate of the University of Michigan’s Computer, Control and Information Engineering (CICE) program, his master’s project was to “solve” the parallel processing problem 20 years ago when engineers thought it would only take a few years. Since then, he has written extensively about the challenges of parallel processors, including emulating those in the human brain in his John Wiley & Sons book Cognizers – Neural Networks and Machines that Think.

DOE’s biggest concern in its call for proposals for the “FastForward” exascale supercomputing program is cutting the power-to-performance ratio by a factor of five. Here is how DOE’s request for proposals puts it: “To get the additional factor of five improvements in power efficiency over projections, a number of technical areas in hardware design need to be explored. These may include: energy efficient hardware building blocks--central processing units, memory, interconnect--novel cooling, and packaging, silicon-photonic communication, and power-aware runtime software and algorithms.” DOE also promised follow-on funding to make sure every technology necessary is developed to realize supercomputers that deliver exaFLOPS of performance, but consume only 20 megawatts of power, by 2020.

Use this MPI library for better application performance on Intel® architecture-based clusters; it implements the high-performance MPI-2 specification on multiple fabrics, delivering maximum end-user performance even with new interconnects and without requiring major changes to the software or operating environment.