Biographical Sketch

Soon after becoming the first woman to earn a PhD in Applied and Computational Mathematics at Princeton University, Alice Koniges joined NERSC, then located at LLNL, as a member of the Computational Physics Group in 1984. She achieved the first successful parallel code run on the four-processor Cray-2. She began her career researching parallel computing and plasma physics, focusing on the intersection of the two fields. Her expertise in the transition from vector to parallel computing culminated in her textbook Industrial Strength Parallel Computing, published by Morgan Kaufmann Publishers in January 2000. Alice joined the Berkeley Lab in 2009.

Her current research interests include exascale computing challenges, benchmarking and performance optimization of application codes, development of Adaptive Mesh Refinement (AMR) and Arbitrary Lagrangian Eulerian (ALE) algorithms for time-dependent PDEs, and application supercomputing in plasma physics, laser physics, and energy research. She regularly gives tutorials and short courses on application supercomputing. She served as Principal Investigator of the Computational Science and Engineering Petascale Initiative at LBNL. She is currently the application lead and co-PI on the eXascale Programming Environment and System Software (XPRESS) project.

Prior to joining the Berkeley Lab, she held various positions at the Lawrence Livermore National Laboratory, including management of the Lab's institutional computing. She also led the effort to develop a new 3D multiscale multiphysics code (ALE-AMR) that is used to predict the impacts of target shrapnel and debris on the operation of the National Ignition Facility (NIF), the world's most powerful laser, and to model Warm Dense Matter (WDM) experiments at the NDCX facility at LBNL. From 1995 to 1997, Alice led the Parallel Applications Technology Program at LLNL. This was the LLNL portion of the largest ($12 million) CRADA (Cooperative Research and Development Agreement) ever undertaken by the Department of Energy. She spent 1998 at the Max-Planck Institute in Garching, Germany (Computer Center and Plasma Physics Institute), where she was a consultant to users at the Institute, assisting in the conversion of application codes for parallel computers. In addition to her PhD, she holds MSE and MA degrees from Princeton and a BA from the University of California, San Diego, and has published approximately 100 refereed technical papers.

Modeling and mitigation of damage are crucial for safe and economical operation of high-power laser facilities. Experiments at the National Ignition Facility use a variety of targets with a range of laser energies spanning more than two orders of magnitude (~14 kJ to ~1.9 MJ). Low-energy inertial confinement fusion experiments are used to study early-time x-ray load symmetry on the capsule, shock timing, and other physics issues. For these experiments, a significant portion of the target is not completely vaporized, and late-time (hundreds of ns) simulations are required to study the generation of debris and shrapnel from these targets. Damage to optics and diagnostics from shrapnel is a major concern for low-energy experiments. We provide the first simulations of entire cryogenic targets, including the Al thermal mechanical package and Si cooling rings. We use a 3D multi-physics multi-material hydrodynamics code, ALE-AMR, for these late-time simulations. The mass, velocity, and spatial distribution of shrapnel are calculated for three experiments with laser energies ranging from 14 to 250 kJ. We calculate damage risk to optics and diagnostics for these three experiments. For the lowest energy re-emit experiment, we provide a detailed analysis of the effects of shrapnel impacts on optics and diagnostics and compare with observations of damage sites.
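As a quick sanity check on the stated energy span, the ratio of the highest to the lowest quoted shot energy can be worked out directly (the variable names below are illustrative, not from the abstract):

```python
import math

e_low, e_high = 14e3, 1.9e6   # joules: the quoted ~14 kJ and ~1.9 MJ shot energies
ratio = e_high / e_low        # how many times larger the highest energy is
decades = math.log10(ratio)   # orders of magnitude spanned
print(f"{ratio:.1f}x, {decades:.2f} orders of magnitude")
```

The ratio comes out near 136, i.e. a bit more than two orders of magnitude, consistent with the abstract's claim.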

Accelerators have gained prominence as the next disruptive technology, with the potential to provide a non-incremental jump in performance. However, the number of applications that have actually moved to accelerators is still limited, for many reasons, arguably the biggest being the gap in understanding between accelerator and application developers. This BoF is an application-oriented session that aims to bring the two camps of application developers and accelerator developers head-to-head.

OpenCL is an open standard for programming heterogeneous parallel computers composed of CPUs, GPUs, and other processors. OpenCL consists of a framework to manipulate the host CPU and one or more compute devices (CPUs, GPUs, or accelerators), and a C-based programming language for writing programs for the compute devices. Using OpenCL, a programmer can write parallel programs that harness all of the resources of a heterogeneous computer. In this hands-on tutorial, we will introduce OpenCL. For ease of learning we will focus on the easier-to-use C++ API, but attendees will also gain an understanding of OpenCL's C API. The format will be a 50/50 split between lectures and exercises. Students will use their own laptops (Windows, Linux, or OS X) and log into a remote server running an OpenCL platform on a range of different processors. Alternatively, students can load OpenCL onto their own laptops prior to the course (Intel, AMD, and NVIDIA provide OpenCL SDKs; Apple laptops with Xcode include OpenCL by default). By the end of the course, attendees will be able to write and optimize OpenCL programs, and will have a collection of example codes to help with future OpenCL program development.

This panel will be a take-off on ABC's popular morning talk program. A lively format will be used to cover a number of controversial topics in the development and application of HPC. Four women from the HPC world will serve as "co-hosts," discussing these topics with international experts such as Jean-Yves Berthou (European Exascale Software Initiative), Dave Turek (IBM), Ryan Waite (Microsoft), and Matt Fetes (venture capitalist). The goal is to air a variety of viewpoints in a lively and entertaining way. The panel will raise thought-provoking questions such as why the HPC community has such a hard time converging on standards, whether co-design is really affordable at HPC scales, why efficiency rather than scalability isn't our goal, whether exascale investments can really pay off, and why high-level languages haven't had real impact in HPC. Interactive polling will be used to involve the audience in charting a course for HPC's future, so be sure to bring your laptop or smartphone.

The current high-performance computing revolution provides opportunity for major increases in computational power over the next several years, if it can be harnessed. The transition from simply increasing single-processor and network performance to different architectural paradigms forces application programmers to rethink the basic models of parallel programming from both the language and problem-division standpoints. One of the major computing facilities available to researchers in fusion energy is the National Energy Research Scientific Computing Center. As the mission computing center for the DOE Office of Science, NERSC is tasked with helping users overcome the challenges of this revolution, both through the use of new parallel constructs and languages and by enabling a broader user community to take advantage of multi-core performance. We discuss the programming model challenges facing researchers in fusion and plasma physics for a variety of simulations ranging from particle-in-cell to fluid-gyrokinetic and MHD models.
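To make the particle-in-cell case concrete, here is a deliberately minimal sketch of one 1D electrostatic PIC step in plain Python (normalized units, periodic boundaries; the function name and parameters are our own illustration, not code from any fusion application). Each of its three phases, charge deposition, field solve, and particle push, stresses the programming model differently when distributed across many cores:

```python
def pic_step(xs, vs, ng=32, L=1.0, qm=-1.0, dt=0.05):
    """Advance particles one step of a toy 1D electrostatic PIC cycle.
    xs, vs: particle positions and velocities; ng: number of grid cells."""
    dx = L / ng
    # Phase 1: charge deposition -- linear (cloud-in-cell) weighting to the grid.
    rho = [0.0] * ng
    for x in xs:
        s = (x % L) / dx
        i = int(s)
        f = s - i
        rho[i] += 1.0 - f
        rho[(i + 1) % ng] += f
    mean = sum(rho) / ng
    rho = [r - mean for r in rho]   # enforce neutrality on a periodic domain
    # Phase 2: field solve -- integrate dE/dx = rho along the grid.
    E, acc = [0.0] * ng, 0.0
    for i in range(ng):
        acc += rho[i] * dx
        E[i] = acc
    ebar = sum(E) / ng
    E = [e - ebar for e in E]       # remove the arbitrary constant of integration
    # Phase 3: gather and push -- interpolate E to particles, leapfrog update.
    new_xs, new_vs = [], []
    for x, v in zip(xs, vs):
        s = (x % L) / dx
        i = int(s)
        f = s - i
        ep = (1.0 - f) * E[i] + f * E[(i + 1) % ng]
        v2 = v + qm * ep * dt
        new_xs.append((x + v2 * dt) % L)
        new_vs.append(v2)
    return new_xs, new_vs
```

Even this toy exposes the tension the abstract alludes to: deposition scatters into shared grid memory (a race under threading), the field solve is a global operation, and the push is embarrassingly parallel; real gyrokinetic and MHD codes face these trade-offs at vastly larger scale.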

On Monday, June 15, 2009, the SciDAC 2009 conference sponsored an Electronic Visualization and Poster Night. Scientists involved in DOE Office of Science research, such as SciDAC, INCITE, and core-funded programs, were encouraged to submit an image or animation to be shown at this event. A DVD of those images and animations is attached to the inside back cover of this proceedings book.