RESEARCH NEWS:

Simulations Shed Light on Fate of Sequestered CO2

Researchers suspect that underground, or geologic, carbon sequestration will be a key tool in reducing atmospheric CO2. To investigate this idea further, Berkeley Lab’s George Pau took advantage of the massively parallel computing capacity of NERSC to create the first-ever three-dimensional simulations exploring how sequestered CO2 and saline aquifers interact. Unprecedented in detail, these simulations—run in both 2-D and 3-D—will help scientists better predict the success of this kind of sequestration project.

The Institute for Advanced Architectures and Algorithms (IAA) was founded in 2008 to encourage co-design between architectures and applications in order to create synergy in their respective evolutions. To explore the larger architectural design space of exascale systems, the IAA began developing simulator software called the Structural Simulation Toolkit (SST). SST lets computer scientists evaluate the impact of architectural choices on application performance and will let computational scientists develop codes and algorithms for future supercomputers that are significantly different from today’s systems.

Role of Simulation in Co-design within IAA

The majority of the new DOE Co-Design Centers are incorporating the SST software into their overall plan for exploring the effects of different exascale node designs, memory structures, power requirements, and performance on their particular science algorithms. The modular structure of the SST, shown in the figure, allows Co-Design Centers to adapt the simulator software to their particular node requirements. The ongoing IAA project continues to develop and enhance exascale system simulator software that has the potential to benefit all the Co-Design Center communities.

A study conducted at the Argonne Leadership Computing Facility (ALCF) by INCITE researchers at Cornell University is the first to successfully simulate functionally important motions of special proteins that play a fundamental role in biological processes. The simulations were run using more than 6.2 million core-hours on the IBM Blue Gene/P at the ALCF. The insight gained from these simulations led to the formulation of a mechanism that provides a unifying explanation of known experimental facts.

The dynamics of proteins and the mechanisms of protein folding and unfolding play key roles in a number of cell functions and in diseases such as cancer and the formation of amyloids, which contribute to Alzheimer’s, Parkinson’s and other diseases. Although the essential structure of a protein is entirely encoded in its amino-acid sequence, the actual folding process is assisted by special proteins called molecular chaperones. Heat shock proteins (Hsps) are essential molecular chaperones present in certain types of cells in all organisms. The study, which used an Hsp70 molecular chaperone from E. coli as a model, focused on opening and closing conformations in the Hsp70 chaperones, which could lead to a better understanding of protein folding, refolding or repair. Based on the results of the simulations, a plausible mechanism of interdomain communication has been proposed, which agrees with information from chaperone opening/closing experiments.

An article by researchers at Pacific Northwest National Laboratory was featured as a cover story in the Journal of Computational Chemistry in January. The article, titled “Parallel implementation of γ-point pseudopotential plane-wave DFT with exact exchange,” was written by Eric J. Bylaska, Kiril Tsemekhman, Scott B. Baden, John H. Weare and Hannes Jonsson. The article reports highly parallel algorithms for implementing exact exchange in pseudopotential plane-wave DFT (density functional theory) programs. Experience has shown that many of the failures of standard DFT, such as low activation barriers, small band gaps, and failures to produce localized states, can be corrected by adding a fraction of exact exchange to standard DFT exchange-correlation functionals. This work was supported by the ASCR Multiscale Mathematics program, ASCR Petascale Tools Program and BES Geosciences program of the Office of Science in the Department of Energy.

Donghai Mei and Guang Lin at Pacific Northwest National Laboratory (PNNL) have developed a multiscale model that simulates the heterogeneous reactions in catalytic reactors by combining first-principles kinetic Monte Carlo (KMC) simulation with a continuum computational fluid dynamics model. The multiscale model is used to study the effects of heat and mass transfer on heterogeneous reaction kinetics. The integrated computational framework consists of a surface phase where catalytic surface reactions occur and a gas-phase boundary layer imposed on the catalyst surface where fluctuating temperature and pressure gradients exist. The surface-phase domain is modeled by the site-explicit first-principles KMC simulation, while the gas-phase boundary-layer domain is described by a computational fluid dynamics model.

Unlike other hybrid models, the present model directly couples the heat and mass fluxes between the two domains through boundary conditions that are updated at each simulation time-step, from the unsteady-state reaction regime through the steady-state reaction regime. The simulation results indicate that limitations on heat and mass transfer in the environment surrounding the catalyst can dramatically affect the observed macroscopic reaction kinetics under the presumed operating conditions. This work has been published in Catalysis Today.
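The coupling strategy can be illustrated with a toy one-dimensional sketch. Everything below—the lattice model, rate constants, grid sizes, and time-step—is invented for illustration and is far simpler than the published framework: a stochastic surface step consumes gas at the boundary node, a finite-difference diffusion step replenishes the boundary layer, and the two domains exchange boundary values every time-step.

```python
import random

def kmc_surface_step(coverage, gas_conc, k_ads, k_rxn, dt):
    """One coarse KMC-like sweep over a lattice of surface sites.

    Empty sites may adsorb a gas molecule; occupied sites may react,
    freeing the site. Returns updated coverage and the number of gas
    molecules consumed this step (all rates are illustrative).
    """
    consumed = 0
    for i, occupied in enumerate(coverage):
        if not occupied and random.random() < k_ads * gas_conc * dt:
            coverage[i] = True          # adsorption consumes gas
            consumed += 1
        elif occupied and random.random() < k_rxn * dt:
            coverage[i] = False         # surface reaction frees the site
    return coverage, consumed

def diffuse(conc, d_coeff, dx, dt, bulk):
    """Explicit finite-difference diffusion through the boundary layer."""
    lam = d_coeff * dt / dx**2          # stable for lam <= 0.5
    new = conc[:]
    new[0] = conc[0] + lam * (conc[1] - conc[0])       # surface node
    for i in range(1, len(conc) - 1):
        new[i] = conc[i] + lam * (conc[i-1] - 2*conc[i] + conc[i+1])
    new[-1] = bulk                      # far-field (bulk gas) condition
    return new

random.seed(1)
sites = [False] * 200                   # catalyst surface lattice
conc = [1.0] * 20                       # gas profile, surface .. bulk
for step in range(500):
    # Surface domain: stochastic chemistry at the current surface conc.
    sites, used = kmc_surface_step(sites, conc[0], 0.02, 0.05, 1.0)
    # Coupling: gas consumed at the surface lowers the boundary value.
    conc[0] = max(0.0, conc[0] - used / len(sites))
    # Gas-phase domain: diffusion replenishes the boundary layer.
    conc = diffuse(conc, 0.1, 1.0, 1.0, 1.0)

coverage = sum(sites) / len(sites)
print(round(coverage, 2), round(conc[0], 2))
```

The essential point mirrored from the paper is only the hand-off itself: neither domain assumes a fixed boundary, so a transport limitation in the gas phase feeds back into the stochastic surface kinetics.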

Many scientific applications are modeled by partial differential equations. One of the most widely used approaches for solving such equations numerically is to decompose the problem domain into a discretized representation referred to as a “mesh.” The Mesh-Oriented datABase (MOAB) is a software component developed at Argonne National Laboratory for representing and evaluating mesh data. The newest release, MOAB 4.0, includes support for parallel mesh reading, writing, and communication. Also included are other important mesh-based capabilities: mesh-to-mesh solution transfer, fast ray tracing, and interfaces to key services such as parallel partitioning, mesh visualization and geometric modeling. The functional interface to MOAB is simple yet powerful, allowing the representation of many types of metadata commonly found on the mesh. MOAB also is optimized for efficiency in space and time.
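As a rough illustration of the mesh-database idea—not MOAB’s actual C++/iMesh interface; every name below is invented—a mesh can be stored as opaque entity handles with arbitrary named "tags" attaching metadata such as solution fields or material labels to entities:

```python
class MeshDatabase:
    """Toy mesh-as-database: entities are integer handles; tags attach
    arbitrary metadata to any entity (a sketch of the concept only)."""

    def __init__(self):
        self._next_handle = 0
        self.coords = {}         # vertex handle -> (x, y, z)
        self.connectivity = {}   # element handle -> tuple of vertex handles
        self.tags = {}           # tag name -> {entity handle: value}

    def create_vertex(self, xyz):
        h = self._next_handle
        self._next_handle += 1
        self.coords[h] = xyz
        return h

    def create_element(self, vertex_handles):
        h = self._next_handle
        self._next_handle += 1
        self.connectivity[h] = tuple(vertex_handles)
        return h

    def tag_set(self, name, handle, value):
        self.tags.setdefault(name, {})[handle] = value

    def tag_get(self, name, handle):
        return self.tags[name][handle]

mesh = MeshDatabase()
quad = mesh.create_element([mesh.create_vertex((x, y, 0.0))
                            for x, y in ((0, 0), (1, 0), (1, 1), (0, 1))])
mesh.tag_set("temperature", quad, 300.0)   # solution data as a tag
mesh.tag_set("material", quad, "graphite") # metadata as a tag
print(mesh.tag_get("temperature", quad))
```

Because every capability—solution transfer, partitioning, visualization—reads and writes through the same handle-plus-tag interface, services can be composed without knowing each other’s data layouts, which is the design point the MOAB team emphasizes.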

MOAB has been used, for example, as a bridge to couple results in multiphysics analysis and to link these applications with other mesh services for nuclear reactor simulation (Fig. 1).

Fig. 1: Very High Temperature Reactor (VHTR) assembly mesh represented in MOAB.

Moreover, initial results indicate that the data abstraction in MOAB is powerful enough to handle many different kinds of mesh data found in applications involving geometric topology groupings and interprocessor interface representation.

Development of MOAB is supported by the DOE SciDAC Center for Interoperable Technologies for Advanced Petascale Simulations (ITAPS). MOAB is released under an open-source Lesser General Public License. The MOAB software can be used on a wide range of computing platforms, from workstations to clusters and high-end parallel systems such as the IBM Blue Gene/P and Cray computers, and has been demonstrated to scale to at least 16,000 processors.

Researchers from Argonne National Laboratory played a major role in this year’s SIAM Computational Science and Engineering (CS&E) conference, held in Reno, Nevada, Feb. 28–March 4. Modeling and simulation have become indispensable in attacking complex problems in CS&E, and Argonne staff participated both as organizers and as presenters at this key CS&E meeting.

Argonne session (co)organizers and chairs from the Mathematics and Computer Science (MCS) Division included the following:

Mihai Anitescu and Victor Zavala: “Optimization in Electric Power Systems”

In addition, over two dozen computer scientists, applied mathematicians, predocs, and postdocs from Argonne’s MCS Division gave presentations on topics ranging from parallel mesh generation, performance modeling, and optimization to numerical strategies and their applications in chemical plants and electromagnetic modeling.

Although big-rig trucks are essential to the country’s economy, they also exact a certain toll. These long-haul trucks average 6 miles per gallon or less and annually dump some 423 million pounds of CO2 into the environment. To improve the efficiency of these vehicles, BMI Corp. launched its SmartTruck program on a modest high-performance computing (HPC) cluster to tackle the design of new, add-on parts for long-haul 18-wheelers. “We initially ran our simulations on an HPC cluster with 96 processors,” recalls BMI founder and CEO Mike Henderson. “We were unable to handle really complex models on the smaller cluster. The solutions lacked accuracy. We could explore possibilities but not run the detailed simulations needed to verify that the designs were meeting our fuel efficiency goals.”

To beef up its computing power, BMI applied for and received a grant through the ORNL Industrial HPC Partnerships Program for time on Jaguar. Its engineers are now creating the most complex truck and trailer model ever simulated using NASA’s Fully Unstructured Navier Stokes (FUN3D) application for computational fluid dynamics analysis. The team models half the tractor and trailer for simulation and analysis purposes, using 107 million grid cells in the process. To study yaw—what happens when the vehicle swerves—they mirror the grid and double it, using 215 million grid cells to accurately model the entire vehicle. BMI’s ultimate goal is to design a sleek, aerodynamic truck with a lower drag coefficient than that of a low-drag car and anticipated fuel efficiencies running as high as 50 percent. For each truck that can be adapted to get an additional 3 mpg, BMI estimates annual fuel savings of 4,500 gallons and $13,500 in costs.
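The quoted savings figures can be sanity-checked with a little arithmetic; note that the annual mileage and fuel price below are inferred from the article’s numbers, not stated in it.

```python
# Back-of-envelope check of the savings estimate (illustrative only;
# mileage and fuel price are back-solved from the quoted figures).
base_mpg, improved_mpg = 6.0, 9.0   # 6 mpg average plus a 3 mpg gain

# Gallons saved per mile driven:
saved_per_mile = 1 / base_mpg - 1 / improved_mpg   # = 1/18 gal/mile

# Annual mileage implied by the quoted 4,500-gallon saving:
implied_miles = 4500 / saved_per_mile
print(implied_miles)        # 81,000 miles/year -- typical long-haul use

# Fuel price implied by the quoted $13,500 cost saving:
implied_price = 13500 / 4500
print(implied_price)        # $3.00 per gallon
```

Both implied values are plausible for long-haul trucking, which suggests the article’s two figures are internally consistent.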

On February 1, the Electronic Simulation Monitoring (eSiMon) Dashboard version 1.0 was released, allowing scientists to monitor and analyze their simulations in real-time. Developed by the Scientific Computing and Imaging Institute at the University of Utah, North Carolina State University, and ORNL, this window into running simulations shows results almost as they occur, displaying data just a minute or two behind the simulations themselves. The dashboard allows the scientists to worry about the science being simulated, rather than learn the intricacies of high-performance computing such as file systems and directories, an increasingly complex area as leadership systems continue to break the petaflop/s barrier.

According to team member Roselyne Tchoua of the OLCF, the package offers three major benefits for computational scientists. First, it allows monitoring of a simulation via the web; it is the only single tool that provides access to and insight into the status of a simulation from any computer and any browser. Second, it hides low-level technical details from users, letting them focus on variables and analysis instead of computational elements. Finally, it enables collaboration among simulation scientists from different areas and degrees of expertise: researchers separated geographically can see the same data simultaneously and collaborate on the spot. The “live” version of the dashboard is physically located at ORNL and can be accessed with an OLCF account at https://esimmon.ccs.ornl.gov. This version of the dashboard gives an overview of ORNL and National Energy Research Scientific Computing Center computers. Users can quickly determine which systems are up or down, which are busy, and where they would like to launch a job. Users can also view the status of their running and past jobs, as well as those of their collaborators.

However, a portable version of eSiMon is also available for any interested party, and the platform cuts across scientific boundaries so that the dashboard can be used for any type of scientific simulation. For information on acquiring and/or using the eSiMon dashboard, visit http://www.olcf.ornl.gov/center-projects/esimmon/.

Researchers at ORNL have demonstrated a simple fabrication approach for making a graphene supercapacitor geometry and device based on a 2D in-plane design and a solid polymer-gel electrolyte that achieves very high capacitive energy storage characteristics. This new, ultrathin design allows for the formation of an efficient electrical double layer (EDL) by allowing for more efficient utilization of the electrochemical surface area (carbons in both the basal and along the edges).
Furthermore, the use of a solid polymer-gel electrolyte (polyvinyl alcohol in phosphoric acid, PVA-H3PO4) serves as both an ionic electrolyte and an electrode separator, thereby making the new design rather unique: These devices are compact, ultrathin, flexible, and optically transparent (see Figure). From a practical standpoint, this new device geometry can be easily extended to other thin-film based supercapacitors, and adapted to various structural and hybrid designs for energy storage devices/applications.

With the recent availability of large amounts of atomically thin and flat layers of conducting materials such as graphene, new designs for thin-film energy storage devices with improved performance have become possible. The work, which takes information obtained at the electron level for graphene and combines it with device engineering, an “electrons to device” approach, demonstrates how an “in-plane” geometry for ultrathin supercapacitors based on electrodes composed of pristine graphene and multilayer reduced graphene oxide can effectively achieve high levels of energy storage capacity. The demonstrated all-solid-state supercapacitors provide a prototype for a broad range of thin-film energy storage devices. Results from the project were published in Nano Letters.

A theoretical technique developed at Oak Ridge National Laboratory is bringing supercomputer simulations and experimental results closer together by identifying common “fingerprints.” ORNL’s Jeremy Smith collaborated on devising the method, called dynamical fingerprints, which reconciles the different signals from experiments and computer simulations to strengthen analyses of molecules in motion. The research will be published in the Proceedings of the National Academy of Sciences.

Experiments tend to produce relatively simple and smooth-looking signals, as they only 'see' a molecule’s motions at low resolution, according to Smith, who directs ORNL's Center for Molecular Biophysics and holds a Governor's Chair at the University of Tennessee. In contrast, data from a supercomputer simulation are complex and difficult to analyze, as the atoms move around in the simulation in a multitude of jumps, wiggles and jiggles. Reconciling these different views of the same phenomenon has been a long-standing problem. The new method solves the problem by calculating peaks within the simulated and experimental data, creating distinct “dynamical fingerprints.” The technique, conceived by Smith's former graduate student Frank Noe, now at the Free University of Berlin, can then link the two datasets.

PEOPLE:

Two Berkeley Lab Researchers Named 2011 Sloan Fellows

Per-Olof Persson and Koushik Sen from Berkeley Lab’s Computational Research Division have been awarded the prestigious Sloan Research Fellowship, given annually by the Alfred P. Sloan Foundation to scientists, mathematicians, and economists who are at an early stage of their research careers. The awardees will each receive a $50,000 grant over the next two years to pursue any line of research they choose.

John Shadid, a Distinguished Member of the Technical Staff at Sandia National Laboratories and a DOE AMR PI, will deliver an invited semi-plenary lecture at the 16th Finite Elements in Flow Problems (FEF) Conference, March 22–25 in Munich, Germany. Shadid’s talk is titled “Towards a Scalable Fully-implicit Fully-coupled Resistive MHD Formulation with Stabilized FE Methods.” He will also deliver an invited lecture at the Mathematics Department of the University of Erlangen-Nuremberg on March 17 and will work with collaborators in the department who are engaged in research on applied mathematics, implicit high-resolution methods on unstructured meshes, and plasma physics simulations.

On February 11, OLCF computational astrophysicist Bronson Messer described the impending death of a giant—a red giant star, that is. Messer spoke on “The Fate of the Martial Star: How Will Betelgeuse Die?” at the University of Tennessee–Knoxville’s science forum. He addressed recent articles speculating that Betelgeuse is set to explode at some point in 2012. “That is possible—just as possible as the star living for another 10 million years,” Messer said.

Messer, currently the OLCF’s acting director of science, gave a crash course in astrophysics to an audience of about 30 students, faculty, and scientifically inclined members of the public and then delved right into the difficulties of simulating an exploding star 20 times as massive as our sun. Messer showed the audience simulations he is running on the OLCF’s Cray XT5 Jaguar. For perspective on the difficulty of these three-dimensional simulations, Messer explained that they depend on complex equations involving general relativity, weak forces, nuclear kinetics, and much more. In a star’s evolution from stable to supernova, the celestial body exhausts its hydrogen, helium, neon, oxygen, and silicon resources and finally begins burning iron. The inner core of the star becomes increasingly dense until 1 cubic centimeter—about the size of a sugar cube—weighs about the same as humanity collectively.

FACILITIES/INFRASTRUCTURE:

ESnet Upgrades Network Performance Knowledge Base

ESnet’s network performance knowledge base, fasterdata.es.net, has been updated and reorganized. The goal of the site is to help users maximize wide-area network bulk data transfer performance by tuning the TCP settings of end hosts and by using file transfer tools designed to maximize network throughput. The site contains over 85 pages of information and advice and receives over 3,000 hits per week from all over the world. It is used across industry and the research and education community to improve network performance and troubleshoot problems.
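Much of this kind of TCP tuning advice comes down to sizing socket buffers to the path’s bandwidth-delay product, the amount of data that must be "in flight" to keep a long, fast path fully utilized. A quick illustrative calculation (the link speed and latency below are hypothetical, not specific figures from the site):

```python
def bdp_bytes(bandwidth_bps, rtt_seconds):
    """Bandwidth-delay product: bytes a TCP connection must be able to
    buffer to keep a path of this speed and latency fully utilized."""
    return bandwidth_bps * rtt_seconds / 8   # bits -> bytes

# Example: a 10 Gbps path between two labs with a 50 ms round-trip time.
bdp = bdp_bytes(10e9, 0.050)
print(bdp / 2**20)   # ~59.6 MiB -- far larger than typical default buffers
```

If the operating system’s maximum TCP buffer is smaller than this value, throughput is capped at buffer/RTT no matter how fast the link is, which is why host tuning matters so much for wide-area bulk transfers.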

Oak Ridge National Laboratory’s Colony team reached a milestone this month by booting a new operating system kernel. By successfully bringing up the advanced kernel on a Cray XT with a SeaStar interconnect, the team paves the way for the next phase of performance and scalability testing. Unlike the typical Linux kernel, which suffers performance drawbacks at scale, the new kernel is designed to provide a full-featured environment with excellent scalability on the world’s most capable machines. Just as coordinated stoplights improve traffic flow, the Colony system software stack co-schedules parallel jobs through an innovative kernel scheduler, removing the harmful effects of operating system noise and interference. The kernel uses a high-precision clock synchronization algorithm developed by the Colony team to provide federated nodes with a global time source sufficient for the required coordination. The Colony Project is led by ORNL computer scientist Terry Jones.

OUTREACH & EDUCATION:

ALCF Staff Mentor Middle-School Girls at 10th Annual IGED

Argonne hosted 76 6th-, 7th- and 8th-grade girls from Illinois, Indiana, and Wisconsin during its 10th Annual Introduce a Girl to Engineering Day (IGED) on February 24. Held in celebration of National Engineering Week, the event focused on introducing girls to engineering careers through hands-on activities and direct interaction with engineers and scientists.

The girls spent the day with Argonne mentors to explore their interests in math, science and engineering. A number of middle-school girls who previously attended IGED also shared their experiences with this year’s participants. The day opened with a career presentation—“Engineering is Fun!” An Engineering Expo featured chemistry and transportation technologies, magnets, geographical information systems, and the conversion of plants into biodiesel fuel, among other exhibits. The girls benefited from hands-on activities and experiments, such as designing a model car and creating materials. Sreeranjani Ramprakash, Technical Support Specialist, and Marta Garcia, Assistant Computational Scientist, at the Argonne Leadership Computing Facility (ALCF) served as mentors during the event.


The Richard Tapia Celebration of Diversity in Computing Conference, to be held April 3–5 at the Fairmont Hotel in San Francisco, will include staff from Lawrence Berkeley and Lawrence Livermore national laboratories. Now in its 10th year, the Tapia conference has a tradition of providing a supportive networking environment for under-represented groups across the broad range of computing and information technology, from science to business to the arts to infrastructure.

The 2011 Tapia conference is chaired by David Patterson, a computer science professor at the University of California, Berkeley with a joint appointment in Berkeley Lab’s Computational Research Division. LBNL Computing Sciences Communications Manager Jon Bashor is the Tapia communications chair and Berkeley Lab staff scientist Tony Drummond is the student research poster chair. From LLNL, Tony Baylis is the registration chair. Both labs are also contributing financial support for the conference.

Gloria D’Azevedo, a senior at Oak Ridge High School, won first place in the Tennessee Junior Science & Humanities Symposium for her research on improving elimination orderings for tree decompositions. She was awarded a $2,000 college scholarship and an all-expense-paid trip to the national JSHS, where she will compete for additional scholarships. Her research was conducted at Oak Ridge National Laboratory with Blair D. Sullivan and Chris Groer as part of the DOE ASCR Applied Mathematics project “Scalable Graph Decompositions & Algorithms to Support the Analysis of Petascale Data.”

D’Azevedo showed computationally that incorporating graph parameters such as the number of second neighbors of a vertex into traditional degree- and fill-based algorithms for choosing an elimination ordering can lead to significantly lower tree-widths. The impact of tree-width on the complexity of tree-decomposition based graph analysis algorithms is exponential, so this new idea for improving orderings could lead to significant speed-up.
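The article does not give the algorithm itself, but the flavor of the idea can be sketched as a greedy minimum-degree elimination ordering in which ties are broken by second-neighborhood size. This is a hypothetical variant in the spirit of the work described, not D’Azevedo’s actual method.

```python
from itertools import combinations

def elimination_width(adj, use_second_neighbors=True):
    """Greedy elimination ordering on graph `adj` (dict: vertex -> set).

    Eliminates a minimum-degree vertex at each step; when
    `use_second_neighbors` is set, ties are broken by the size of the
    vertex's second neighborhood. Returns the width of the induced
    tree decomposition (the largest eliminated neighborhood)."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}  # work on a copy
    width = 0
    while adj:
        def key(v):
            second = (set().union(*(adj[u] for u in adj[v]))
                      - adj[v] - {v}) if adj[v] else set()
            return (len(adj[v]),
                    len(second) if use_second_neighbors else 0)
        v = min(adj, key=key)
        nbrs = adj.pop(v)
        width = max(width, len(nbrs))
        for a, b in combinations(nbrs, 2):  # fill-in: clique on neighbors
            adj[a].add(b)
            adj[b].add(a)
        for u in nbrs:
            adj[u].discard(v)
    return width

# A 3x3 grid graph has treewidth 3, so a good ordering gives width 3.
grid = {(i, j): {(i + di, j + dj)
                 for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                 if 0 <= i + di < 3 and 0 <= j + dj < 3}
        for i in range(3) for j in range(3)}
print(elimination_width(grid))
```

Because the running time of tree-decomposition-based analyses is exponential in the width, even a small reduction in the width produced by the ordering heuristic translates into a large practical speedup, which is what makes tie-breaking refinements like the second-neighbor idea worthwhile.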

The Tennessee JSHS is an annual scientific research competition that gives selected high school students the opportunity to present original research in a public forum. This year’s competition was held Feb. 24-25 at the University of Tennessee-Knoxville. The JSHS is administered through the Academy of Applied Sciences in cooperation with leading research universities throughout the nation to promote and foster research and experimentation in the science, technology, engineering and mathematics (STEM) disciplines at the high school level. The JSHS is jointly sponsored by the research offices of the U.S. Departments of the Army, Navy, and Air Force with the aim of advancing the nation’s scientific and technological progress by challenging and engaging students in the STEM disciplines.

Eighteen students from Kennedy High School’s TechFutures Academy in Richmond, Calif. paid a visit to NERSC on February 9. In addition to a brief introduction to supercomputers from NERSC’s User Services Group Lead, Katie Antypas, the students also got a lesson about how “Careers in Computer Science Save the World” from Associate Lab Director of Computing Sciences Kathy Yelick. One of the highlights of the visit was a tour of the facility’s computer room. The half-day field trip is part of a burgeoning outreach connection between Berkeley Lab Computing Sciences and the high school.

The OLCF and Argonne Leadership Computing Facility (ALCF) cohosted the first of three webinars on January 24, guiding researchers through the proposal process for earning time on the two facilities’ leadership-class supercomputers. The Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program is offering more than 1.6 billion computational hours in 2012 on the OLCF’s Jaguar and ALCF’s Intrepid systems. The webinars provide researchers with necessary information for writing a competitive proposal and using leadership-class systems, as well as an opportunity to ask questions of the computing facilities’ staffs.

The second webinar was March 21, and the third will be in May on a date to be determined. The INCITE program itself will accept applications from April 13 to June 30. Awards, on average, exceed 20 million hours. For a list of 2011 INCITE awards, see 2011 INCITE Awardees (746KB). Additional information on the INCITE program can be found at www.doeleadershipcomputing.org or by contacting INCITE@DOEleadershipcomputing.org.