Bill Kramer

William (Bill) Kramer is the deputy project manager for the sustained-petascale Blue Waters project at Illinois’ National Center for Supercomputing Applications (NCSA). Formerly, he was the general manager of the Department of Energy’s National Energy Research Scientific Computing Center (NERSC). At NERSC and earlier, Kramer led the acquisition, testing, and introduction of more than 20 high-performance computing and storage systems. He was instrumental in managing the paradigm shift from vector computing to massively parallel systems and was one of the primary contributors to LBNL’s Science Driven Computer Architecture initiative.

Among Kramer’s outstanding accomplishments are deploying and operating large-scale computational and data systems and best-of-class facilities, and leading intense, high-visibility projects. He combines broad, significant technical contributions with leadership and management experience in high-performance, interactive, and real-time computing; data-focused analysis; cyberinfrastructure; and applications and software development. He has substantial, sustained expertise in managing world-class, trend-setting organizations, a commitment to excellence, a record of fostering the education and development of the next generation of researchers and leaders, and a track record of building lasting collaborations and relationships.

Kramer introduced project planning and metrics, negotiated multi-million dollar contracts, and led the effort to re-engineer LBNL’s computer support system. He was named one of HPCwire’s “People to Watch” in 2005 and was the General Chair for SC 05.

NERSC, where Kramer served as general manager, is the flagship computing facility of the Department of Energy’s Office of Science at Lawrence Berkeley National Laboratory (LBNL). Before Berkeley Lab, Kramer worked at the NASA Ames Research Center, where he was responsible for all aspects of operations and customer support for NASA’s Numerical Aerodynamic Simulation (NAS) supercomputer center and other large computational projects, and where he started a major air traffic control program. He has also worked at the University of Delaware and Inland Steel Corporation.

Blue Waters is the 20th supercomputer Kramer has deployed or managed. Several were firsts of their kind, including the world’s first production UNIX supercomputer and the first production-quality massively parallel system. In addition, he has deployed and managed large clusters of workstations, five extremely large data repositories, some of the world’s most intense networks, and other extreme-scale systems. He has also been involved in the design, creation, and commissioning of six best-of-class HPC facilities.

Kramer holds a BS and an MS in computer science from Purdue University, an ME in electrical engineering from the University of Delaware, and a PhD in computer science from UC Berkeley, as well as a number of professional certifications, including Level II IT Project Manager.

His personal interests include scuba diving, water polo, kayaking, canoeing, and fly-fishing. An enthusiastic NY Yankees fan, he enjoys watching and playing baseball and softball, and traveling.

Bill’s Top 5 HPC initiatives or technologies to watch in 2013:

System evaluation, particularly of extreme-scale architectures. The ever-widening gap between Top500 rankings and sustained application performance has left us unprepared for challenges such as the memory wall.

Big Data rules. While we have yet to define what Big Data really means, everyone will be responding to it.

Integration of two major approaches to Big Data/storage. Traditional HPC file systems such as Lustre and GPFS and web-services-style file systems such as HDFS come from different assumptions and design philosophies, but both are critical to dealing with Big Data. Whether this means building HDFS features into HPC parallel file systems, or layering one on the other, the use models will demand a more common set of technologies to handle Big Data.

Interconnect technologies. Cray and IBM appear to be betting on a future that does not include a special interconnect. Intel seems to be betting that interconnects will be part of its business. What’s the best path to pursue?

Automatic ways to use GPGPUs, accelerators, and many-core processors, such as OpenACC. While early adopters have already moved to GPUs, the bar has to be lowered so the rest of the community can adopt these technologies.