InTheLoop | 11.09.2009

The weekly newsletter for Berkeley Lab Computing Sciences

November 9, 2009

NERSC Announces Selection of IBM iDataPlex Computer

NERSC’s next high performance computing system will be an IBM iDataPlex Linux cluster, which will replace Bassi and Jacquard early in allocation year 2010. The IBM system, selected in a competitive procurement, provides excellent performance, good energy efficiency per flop, and a familiar environment for mid-range parallel applications.

The system, named after the American scientist George Washington Carver, will consist of 3,200 computational cores, configured as 400 nodes, each with two quad-core Intel Nehalem 2.67 GHz processors and 24 GB of memory. Carver’s peak performance will be 34.2 Tflops, making it about 3.5 times more powerful than Bassi and Jacquard. Its interconnect will be 4X QDR InfiniBand, configured as local fat trees with a global 2D mesh. Every node will have a fully featured Linux OS, and all file systems (home, project, and scratch) will be hosted on the NERSC Global Filesystem.
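
As a quick sanity check on the quoted figures, the peak rate follows from the core count and clock speed if one assumes the usual four double-precision floating point operations per cycle for a Nehalem core (the 4-flops-per-cycle figure is our assumption, not stated in the announcement):

    # Back-of-the-envelope check of Carver's quoted peak performance.
    nodes = 400
    cores_per_node = 8          # two quad-core processors per node
    clock_ghz = 2.67
    flops_per_cycle = 4         # assumed value for Nehalem, not from the announcement

    peak_tflops = nodes * cores_per_node * clock_ghz * flops_per_cycle / 1000.0
    print(f"{peak_tflops:.1f} Tflops")   # -> 34.2 Tflops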

An overview of the Carver configuration is available here. More details and plans for early user access will be announced later.

Berkeley Lab Booth at SC09 to Give Global Perspective

When SC09 convenes Nov. 14–20 in Portland, Berkeley Lab will feature an innovative display technology in Booth 723 that shows three-dimensional science and engineering data without the use of special glasses or viewers. The centerpiece of the display is a 24-inch Magic Planet globe made by Global Imagination. Berkeley Lab staff will use the globe to display global climate simulations, astrophysics research, and the growth and operation of ESnet.

Many Berkeley Lab researchers and technical staff are participating in SC09, as listed below. For a complete look at Berkeley Lab activities during SC09, go here.

Masterworks

Teresa Head-Gordon of LBNL/UC Berkeley will discuss “Big Science and Computing Opportunities: Molecular Theory, Models and Simulation” during the Masterworks Session on Multi-Scale Simulations in Bioscience to be held Wednesday, Nov. 18. Read the abstract.

Michael Wehner of LBNL’s Computational Research Division will talk about “Green Flash: Exascale Computing for Ultra-High Resolution Climate Modeling” as part of the Masterworks Session on Toward Exascale Climate Modeling held Thursday, Nov. 19. Read the abstract.

Tutorials

Hank Childs of Berkeley Lab and Sean Ahern of Oak Ridge National Laboratory will present “VisIt — Visualization and Analysis for Very Large Data Sets,” a tutorial on VisIt, an open source visualization and analysis tool designed for processing large data. The half-day session will be held on Sunday, Nov. 15. Read the abstract.

Alice Koniges of Berkeley Lab/NERSC, along with Rusty Lusk of Argonne National Laboratory and three others, will present “Application Supercomputing and the Many-Core Paradigm Shift,” a tutorial giving an overview of supercomputing application development with an emphasis on the many-core paradigm shift and programming languages. The full-day session will be held on Sunday, Nov. 15. Read the abstract.

Workshops

Kathy Yelick, Victor Markowitz, John Shalf, Shane Canon, Lavanya Ramakrishnan, Shreyas Cholia, and Keith Jackson of LBNL will contribute to “Using Clouds for Parallel Computations in Systems Biology” on Monday, Nov. 16. This all-day workshop aims to bring together computer scientists, bioinformaticists, and computational biologists to discuss the feasibility of using cloud computing for systems biology. Yelick and Markowitz will participate in a panel discussion on “Future Directions for Cloud Computing in Systems and Computational Bio.” Shalf, Canon, Ramakrishnan, Cholia, and Jackson will present a technical talk, “A Performance Comparison of Massively Parallel Sequence Matching Computations on Cloud Computing Platforms using mpiBLAST and Hadoop.” Markowitz will also chair an afternoon session of technical talks. Read the abstract.

Andrew Canning and Lin-Wang Wang are again co-organizing the 5th International Workshop on High Performance Computing for Nano-science and Technology (HPCNano09). The theme of this year’s workshop, to be held Sunday, Nov. 15, is “Cyber Gateway for Nano Discoveries and Innovation.” Read the abstract.

Technical Papers

David Pugmire and Sean Ahern of ORNL, Hank Childs and Gunther Weber of LBNL, and Christoph Garth of UC Davis will present their paper “Scalable Computation of Streamlines on Very Large Datasets” during the Large-Scale Applications session on Tuesday, Nov. 17. Read the abstract.

Marghoob Mohiyuddin, James Demmel and Kathy Yelick of LBNL/UC Berkeley, and Mark Hoemmen of UC Berkeley will present a paper on “Minimizing Communication in Sparse Matrix Solvers” as part of the Sparse Matrix Computation session on Tuesday, Nov. 17. Read the abstract.

Marghoob Mohiyuddin of LBNL/UC Berkeley, Mark Murphy and John Wawrzynek of UC Berkeley, and Leonid Oliker, John Shalf, and Samuel Williams of LBNL will present the paper “A Design Methodology for Domain-Optimized Power-Efficient Supercomputing” during the Future HPC Architectures session on Thursday, Nov. 19. Read the abstract.

Panel Discussion

William Tschudi of LBNL and Steve Elbert of PNNL will be among the members of a panel discussion on “Energy Efficient Data Centers for HPC, How Lean and Green Do We Need to Be?” to be held on Thursday, Nov. 19. Read the abstract.

Birds of a Feather Sessions

William Tschudi of LBNL will lead a BoF for the Energy Efficient High Performance Computing Working Group on Thursday, Nov. 19. Read more about the session.

Jon Dugan of LBNL/ESnet will lead a BoF on Network Measurement on Wednesday, Nov. 18. Read more about the session.

Exhibitor Forum

Erich Strohmaier and Horst Simon will participate in the TOP500 Supercomputers session on Tuesday, Nov. 17, presenting the 34th edition of this twice-yearly list. Read more about this session.

Special Exhibit

Berkeley Lab’s Bill Tschudi is participating in the “Datacenter of the Future” exhibit in the lobby of the Oregon Convention Center. This booth showcases design elements of energy efficient HPC datacenters from diverse locations around the globe. Read more about this exhibit.

ACM Gordon Bell Prize Finalist

Berkeley Lab Associate Lab Director Horst Simon is a member of an IBM team making the finals for the 2009 ACM Gordon Bell Prize with their entry “The Cat is Out of the Bag: Cortical Simulations with 10⁹ Neurons, 10¹³ Synapses.” Read more about this project.

User Groups

At the SPXXL IBM User Group Meeting on Nov. 16–17, David Paul and Jeff Broughton will present a NERSC site update and “Magellan — Building a Science Cloud.”

At the Cray User Group XTreme SIG Meeting on Sunday, Nov. 15, James Craw will discuss job failure analysis and job completion metrics, DVS, and other topics.

Booth 723 Activities

Over the past few years, Berkeley Lab’s booth has emphasized our most valuable computing, networking and scientific resources—our world-class roster of recognized experts. Many of our best-known staff members will again be holding “office hours” in our booth, waiting to exchange ideas or answer questions. No canned presentations, just useful information.

Bandwidth Challenge Highlights DOE’s Science Services

It typically takes about two days to move 10 terabytes of climate data between DOE computing facilities. But on Nov. 17 a collaboration of researchers and engineers from Argonne, Lawrence Berkeley and Lawrence Livermore National Laboratories will attempt to transfer more data than this in approximately two hours. The occasion is the annual SC09 Bandwidth Challenge. Learn more.
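
For a rough sense of the sustained rates those figures imply (a back-of-the-envelope sketch only; the actual challenge data volume and timing may differ):

    # Average rate needed to move a given volume of data in a given time.
    def sustained_gbps(terabytes, hours):
        bits = terabytes * 1e12 * 8          # decimal terabytes -> bits
        return bits / (hours * 3600) / 1e9

    print(f"10 TB in two days : {sustained_gbps(10, 48):.2f} Gbps")   # ~0.46 Gbps
    print(f"10 TB in two hours: {sustained_gbps(10, 2):.1f} Gbps")    # ~11.1 Gbps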

ESnet to Support Two Teams in SC09 Bandwidth Challenge

Two teams in SC09’s Bandwidth Challenge will be transporting terabytes of data across ESnet over a period of several hours. To ensure that the data arrives within the challenge timeframe, the teams used ESnet’s OSCARS to reserve bandwidth on its Science Data Network (SDN). Learn more.

High Energy Physics Computing Requirements Workshop This Week

The Large Scale Computing and Storage Requirements for High Energy Physics Workshop is being held Thursday and Friday, Nov. 12–13, in Rockville, MD. This workshop is being organized by the Department of Energy’s Office of High Energy Physics and Office of Advanced Scientific Computing Research to elucidate computing requirements for high energy physics research at NERSC.

These requirements will serve as input to the NERSC planning processes for systems and support, and will help ensure that NERSC continues to provide world-class support for scientific discovery to DOE scientists and their collaborators. The tangible outcome of the workshop will be a report that includes the computing, storage, data, and support requirements and a supporting, science-based narrative.

Protect Intellectual Property When Distributing Software

DOE policy promotes the dissemination of DOE-developed software wherever appropriate. However, Lab-developed code should never be posted to publicly accessible Web sites without prior authorization by the Lab's Technology Transfer and Intellectual Property Management Department.

If you develop software, please review the recently updated Rules for Publishing and Distributing Software at Berkeley Lab. There are no major changes to procedures, but there are new helpful tips about when to contact the Tech Transfer Department as well as guidelines for incorporating third party software into your code when necessary.

This Week’s Computing Sciences Seminars

HIPS: A Parallel Hybrid Direct/Iterative Solver Based on a Schur Complement Approach

Nowadays, three-dimensional numerical simulations often require a tremendous amount of resources. On one hand, direct methods can be mandatory for solving very ill-conditioned systems, but for large 3D simulations they are constrained by prohibitive memory requirements and a very large number of floating point operations. Iterative methods, on the other hand, require much less memory and are generally more scalable.

Hybrid methods based on a Schur complement approach try to combine the strengths of the two classes of methods. A common approach is to decompose the matrix graph into subdomains. The part of the matrix corresponding to the interior unknowns is handled with a direct method, so that solving the global system reduces to solving the Schur complement system, which can then be solved with an iterative method.
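
For readers unfamiliar with the technique, the reduction can be sketched on a small dense example in Python (a minimal illustration only, not the HIPS implementation, which operates on distributed sparse matrices with a specialized ordering and preconditioner):

    # Minimal dense sketch: eliminate interior unknowns with a direct
    # factorization, solve the interface (Schur complement) system iteratively,
    # then back-substitute for the interior unknowns.
    import numpy as np
    from scipy.linalg import lu_factor, lu_solve
    from scipy.sparse.linalg import LinearOperator, gmres

    rng = np.random.default_rng(0)
    n_i, n_g = 80, 20                        # interior / interface unknowns
    n = n_i + n_g
    A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix
    b = rng.standard_normal(n)

    A_ii, A_ig = A[:n_i, :n_i], A[:n_i, n_i:]
    A_gi, A_gg = A[n_i:, :n_i], A[n_i:, n_i:]
    b_i, b_g = b[:n_i], b[n_i:]

    lu = lu_factor(A_ii)                     # direct factorization of the interior block

    # Apply S = A_gg - A_gi * inv(A_ii) * A_ig without forming it explicitly.
    S = LinearOperator((n_g, n_g),
                       matvec=lambda x: A_gg @ x - A_gi @ lu_solve(lu, A_ig @ x))
    rhs_g = b_g - A_gi @ lu_solve(lu, b_i)   # reduced right-hand side

    x_g, info = gmres(S, rhs_g, atol=1e-12)  # iterative solve of the Schur system
    x_i = lu_solve(lu, b_i - A_ig @ x_g)     # back-substitution for the interior

    x = np.concatenate([x_i, x_g])
    print("residual norm:", np.linalg.norm(A @ x - b))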

In the HIPS library, we have developed such an approach. The cornerstone of our method is a special decomposition and ordering of the matrix that allows us to construct a reduced system and a robust preconditioner at low memory cost. In this talk, I will present the hybrid method and compare two ways to compute the incomplete factorization of the Schur complement. I will also present a parallelization scheme based on using several domains per processor, and give results for large, difficult test cases on a large number of processors.

Forest Disturbance and the Earth System Carbon Sink

Ecophysiological processes operating at a variety of scales cause terrestrial ecosystems to act as net sinks for atmospheric CO2, mitigating a large portion of fossil fuel carbon emissions. Mechanisms controlling the strength of this sink are key features of the coupled carbon-climate system. Plot-based studies have suggested that CO2 fertilization of old-growth tropical forests may account for up to 50% of the terrestrial sink. However, physiological limitations on plant response to CO2 indicate a much lower sink potential for tropical forests. At the landscape scale, forest disturbance and recovery processes act as key constraints on regional carbon balance. This seminar will focus on ecophysiological processes controlling forest ecosystem-atmosphere carbon exchange across multiple scales. A synthesis of extensive field measurements and simulation modeling will be used to explore the potential sensitivity of old-growth tropical forests to rising CO2. Extensive remote sensing analysis and forest inventory data will be employed to study links between tree mortality disturbance and landscape carbon balance. Specific examples, including blowdowns in the Amazon and Hurricane Katrina’s impacts on Gulf Coast forests, will be illustrated. The importance of forest succession and plant functional types in Earth system models will also be discussed.

The Rigel Project

Chip architectures such as the Nvidia G80 initiated the era of massively parallel general purpose computing on the client. Fueling the economic fire for such high-performance chips are interactive client application domains, such as gaming, that are hungry for performance. Emerging applications in vision, imaging, video processing, virtual immersion, and robotics also have an insatiable need for speed, and provide a future performance roadmap for such many-core chips.

In the Rigel Project, we are developing a scalable architecture with 1000s of cores, and many Tflops of peak performance. Rigel has a well-defined and general-purpose programmer interface that enables a broad class of task and data parallel applications to be mapped efficiently to the chip. In this talk I will describe the major results of the project thus far, touching on subjects such as scalable cache coherence through hardware and software, the Rigel task-based parallel programming model, area-power-performance tradeoffs for throughput-oriented architectures, and parallel programming tools.
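
As a loose illustration of the task-parallel pattern such a programming model targets (this is not Rigel’s actual interface, just a generic task-queue sketch using Python’s standard library):

    # Generic task-queue sketch of the kind of task parallelism a model like
    # Rigel's targets. This is NOT Rigel's interface; it only illustrates
    # enqueueing many independent tasks and letting a pool of workers drain them.
    from concurrent.futures import ThreadPoolExecutor

    def process_tile(tile_id):
        # Stand-in for an independent unit of work, e.g. one tile of an image.
        return sum(i * i for i in range(tile_id * 1000, (tile_id + 1) * 1000))

    if __name__ == "__main__":
        with ThreadPoolExecutor(max_workers=8) as pool:
            results = list(pool.map(process_tile, range(64)))   # 64 independent tasks
        print("checksum:", sum(results))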

Link of the Week: Seven Questions That Keep Physicists Up at Night

It’s not your average confession show: a panel of leading physicists spilling the beans about what keeps them tossing and turning in the wee hours. But that was the scene in front of a packed auditorium at the Perimeter Institute, in Waterloo, Canada, when a panel of physicists was asked to respond to a single question: “What keeps you awake at night?” The discussion was part of the Quantum to Cosmos Festival, held October 15–25.

While most panelists professed to sleep very soundly, New Scientist reports on seven key conundrums that emerged during the session, which can be viewed here.

About Computing Sciences at Berkeley Lab

The Lawrence Berkeley National Laboratory (Berkeley Lab) Computing Sciences organization provides the computing and networking resources and expertise critical to advancing the Department of Energy's research missions: developing new energy sources, improving energy efficiency, developing new materials and increasing our understanding of ourselves, our world and our universe.

ESnet, the Energy Sciences Network, provides the high-bandwidth, reliable connections that link scientists at 40 DOE research sites to each other and to experimental facilities and supercomputing centers around the country. The National Energy Research Scientific Computing Center (NERSC) powers the discoveries of 6,000 scientists at national laboratories and universities, including those at Berkeley Lab's Computational Research Division (CRD). CRD conducts research and development in mathematical modeling and simulation, algorithm design, data storage, management and analysis, computer system architecture and high-performance software implementation. NERSC and ESnet are DOE Office of Science User Facilities.

Lawrence Berkeley National Laboratory addresses the world's most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab's scientific expertise has been recognized with 13 Nobel prizes. The University of California manages Berkeley Lab for the DOE’s Office of Science.

DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.