High Performance Computing

In the last two decades, computing has become established as one of the three
methodologies of scientific research, alongside theory and experimentation.
However, exploiting hardware performance in science applications has grown
increasingly difficult. Moreover, approaches to grand challenge problems have
historically focused on first-principles simulations, which start from a
mathematical model, often expressed as a system of equations.

But the nation faces problems that cannot be solved in the foreseeable future
using first-principles approaches. In biology, networks and pathways must be
inferred from experimental data. In national security, social networks must be
derived from intelligence data. In energy supply, real-time models of the
national power grid must be built from sensor network data. These applications
require a different system architecture, one that enables data-intensive
computing.

Data-intensive computing starts from the analysis and interpretation of massive
amounts of data: data that are needed to build models and to constrain the
space of feasible models so that simulations become computationally tractable.
These data sets are far too large for conventional approaches to storage,
manipulation, archiving, navigation, visualization, and understanding.

PNNL succeeds in high performance computing by merging multiple areas of
science and technology.

Our researchers are enabling the use of high performance computing to solve
scientific problems by developing and implementing high-level programming
abstractions. For example, our Global Arrays Toolkit provides a high-level,
easy-to-use programming model whose abstractions suit the science domains it
targets, as illustrated below. We are also developing problem-solving
environments to increase the ease of use and availability of high performance
computing for non-specialists.
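
To make the abstraction concrete, the sketch below creates a distributed
two-dimensional array with the Global Arrays C interface and fills the locally
owned patch on each process using a one-sided put. It is a minimal
illustration, not PNNL production code: the array name, dimensions, and fill
values are chosen only for this example, and it assumes a GA installation
built against MPI (headers ga.h and macdecls.h).

    #include <stdio.h>
    #include <stdlib.h>
    #include <mpi.h>
    #include "ga.h"        /* Global Arrays C interface */
    #include "macdecls.h"  /* MA definitions used by GA (e.g. C_DBL) */

    #define N 1000  /* illustrative global dimension */

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        GA_Initialize();

        int me = GA_Nodeid();      /* this process's rank */
        int nprocs = GA_Nnodes();  /* total number of processes */

        /* Create an N x N double-precision array distributed across all
           processes; chunk = {-1, -1} lets GA choose the blocking. */
        int dims[2] = {N, N};
        int chunk[2] = {-1, -1};
        int g_a = NGA_Create(C_DBL, 2, dims, "example_array", chunk);
        GA_Zero(g_a);

        /* Find the patch of the global array owned by this process. */
        int lo[2], hi[2];
        NGA_Distribution(g_a, me, lo, hi);

        /* Some processes may own no elements; guard against that. */
        if (hi[0] >= lo[0] && hi[1] >= lo[1]) {
            int nrows = hi[0] - lo[0] + 1;
            int ncols = hi[1] - lo[1] + 1;
            double *buf = malloc((size_t)nrows * ncols * sizeof(double));
            for (int i = 0; i < nrows * ncols; i++) buf[i] = (double)me;

            /* One-sided put: write the local buffer into the globally
               indexed patch, with no explicit send/receive pairs. */
            int ld = ncols;  /* leading dimension of the local buffer */
            NGA_Put(g_a, lo, hi, buf, &ld);
            free(buf);
        }

        GA_Sync();  /* make all puts globally visible */
        if (me == 0)
            printf("created %d x %d array on %d processes\n", N, N, nprocs);

        GA_Destroy(g_a);
        GA_Terminate();
        MPI_Finalize();
        return 0;
    }

The point of the model shows up in the calls: the array is addressed by global
indices, and NGA_Put delivers data to whichever process owns the target patch,
so application code reads much like shared-memory code even though the array
is physically distributed.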