A recent report from a nonpartisan working group presents a well-articulated argument for reinventing the U.S. National Labs so they can effectively meet the challenges of the coming exascale and zettascale decades.

The working group consists of The Information Technology and Innovation Foundation, The Center for American Progress, and The Heritage Foundation.

The Exascale Report recently reported on the daunting challenges facing the new Secretary of Energy, Ernest Moniz. Many in the HPC community hope he will establish a strong leadership position for DOE. There is deep-rooted concern throughout the community as the nation’s National Lab researchers and scientists watch the U.S. position of global technology leader give way to Chinese ingenuity, determination, and overwhelming technology research and development budgets.

“We’ve faced technology transitions and what some have called paradigm shifts before – and did those with rather impressive results. The big difference this time around is the leadership and their attitude(s), or lack thereof, toward stepping on toes and fighting for a longer-term U.S. exascale initiative with funding that will keep the U.S. competitive.”

“It seems that, in order to understand the application challenges of the next 7-10 years, we need a handle on the new technologies that developers will have to work with. Yet almost every aspect – every technical detail – related to the technologies of the future, particularly exascale, is up in the air right now. Have you determined some approaches that application developers could actually be using today to ensure their code will be scalable on exaflops machines?”

The Exascale Report staff will provide exclusive, in-depth coverage of the House Committee on Science, Space, and Technology’s Subcommittee on Energy’s May 22nd hearing on the topic of exascale computing.

“Computational science and engineering in general and the mission of leadership computing, in particular, is very important for U.S. technology leadership. The impact is both very broad and deep. It is difficult to imagine that a country can have leadership in science and engineering, and, as a result, define the leading edge in innovative technologies without being truly excellent in computational science. This importance has been realized in many ways, from the Department of Energy’s national leadership in computational science and engineering, to the broad bi-partisan support for supercomputing in federal R&D budgets, to the growing utilization of supercomputing by industry. Titan, specifically, is a major step on the road to maintain expected growth rates in performance while “changing the game” with respect to energy efficiency for supercomputers based on commodity hardware components.”

“When we integrate the fabric structure for future HPC systems, there are a number of benefits we can discuss from an end-user perspective. Fundamentally it’s power, cost and capability. First, by bringing fabric integration to world-class silicon, we give it the advantage of Moore’s Law – an advantage it might not have elsewhere. The densest transistors in the world, the best-performing logic transistors, are now also the transistors driving the more complicated logic behind the NIC and the interconnect. This has a direct impact on both the power requirements and the total cost.”

The HPC industry is engaged in a worldwide arms race to build exascale systems. The next generation of HPC systems will be much more challenging for users, with millions of heterogeneous processor cores, complex memory hierarchies and different programming approaches. The UK is addressing these issues to keep its industry at the forefront of HPC use. In recent years the UK government has been lobbied by the HPC community to fund systems that help smooth the transition to the next generation of HPC hardware and software. The UK e-Infrastructure has been refreshed to support academic use and to increase economic output through the industrial exploitation of HPC. Large-scale HPC facilities can enable scientists and engineers to do things that were not possible before, such as adding new capabilities to an application or increasing the fidelity of modelling – which can in turn lead to better, lighter, stronger products that are less expensive to manufacture.

“When we talk about what we can do for the healthcare industry, thanks to the investments in supercomputing, that’s what I think will be the big game changer: taking the technology that we are all collectively investing in and applying it to real-world change.”

The technical presentations and demonstrations depicting the work and accomplishments of the DOD and DOE researchers, typically seen at the SC conferences, were sorely missed at SC12. What can we expect for SC13?

Will the DOE and DOD be DOA at SC13?

In a recent random poll (OK, it was in December 2012), that was one of the questions we asked.

Industry Perspectives

"InfiniBand's advantages of highest performance, scalability and robustness enable users to maximize their data center return on investment. InfiniBand was chosen by far more end-users compared to a proprietary offering, resulting in a more than 85 percent market share. We are happy to see our open Ethernet adapter and switch solutions enable all of the 40G and the first 100G Ethernet systems on the TOP500 list, resulting in overall 194 systems using Mellanox for their compute and storage connectivity."

White Papers

This white paper reviews common HPC-environment challenges and outlines solutions that can help IT professionals deliver best-in-class HPC cloud solutions—without undue stress and organizational chaos.