Rock Stars of HPC: Thomas Schulthess

Our Rock Stars of HPC gallery is growing as we look to a new generation of heterogeneous computing. And when the opportunity came to name our first European Rock Star of HPC, one name kept coming up: Thomas Schulthess.

Thomas Schulthess is the director of the Swiss National Supercomputing Centre (CSCS) in Manno. He studied physics and earned his Ph.D. at ETH Zurich. As CSCS director, he will also be professor of computational physics at ETH. He worked for twelve years at Oak Ridge National Laboratory (ORNL) in Tennessee, a leading supercomputing and research institution in the US, where, from 2002, he led the Computational Materials Science Group of some 30 co-workers.

Thomas Schulthess studied physics at ETH Zurich and earned his doctorate in 1994 with a thesis on metal alloys that combined experimental data with supercomputing simulations. He subsequently continued his research in the US and published around seventy papers in the leading journals of his field. His present research is focused on the magnetic properties of metallic nanoparticles (nano-magnetism). Using high-performance computing, he studies the magnetic structures of metal alloys; of particular interest is his work on giant magnetoresistance. He is also a two-time winner of the Gordon Bell Prize.

insideHPC: You were schooled as a physicist. What got you interested in high performance computing?

Thomas Schulthess: Physics being the mother of modern science, it is not at all surprising that many researchers in this field are interested in high performance computing – I am no exception. In my particular domain, condensed matter physics and material science, we have a canonical model (the many-body Schrödinger equation) that suffers from the curse of dimensionality. We are therefore constantly looking for better algorithms and more powerful computers to solve the particular problems we are investigating. This is how I got interested in ever more powerful supercomputers, and we had a wave of machines developed at ORNL that helped us tremendously. But when you look around, haven’t most serious players in HPC been trained as physicists? One could even argue that physicists involved in the Manhattan Project started HPC.
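The "curse of dimensionality" he mentions can be made concrete with a back-of-the-envelope calculation (a toy illustration, not from the interview): for a lattice of N spin-1/2 sites, the many-body Hilbert space has dimension 2^N, so even storing a single state vector exactly becomes impossible long before N reaches chemically interesting sizes — which is exactly why better algorithms matter as much as bigger machines.

```python
# Toy illustration of the curse of dimensionality for the many-body
# Schrodinger equation: for N spin-1/2 sites the Hilbert space has
# dimension 2**N, so the memory needed for one exact state vector
# (16 bytes per complex amplitude) grows exponentially with N.

def state_vector_bytes(n_sites: int) -> int:
    """Memory to store one exact wavefunction over n_sites spin-1/2 sites."""
    return (2 ** n_sites) * 16  # complex128 amplitudes

for n in (10, 20, 30, 40, 50):
    gib = state_vector_bytes(n) / 2**30
    print(f"N = {n:2d} sites -> {gib:,.1f} GiB")
```

Around N = 50, a single state vector would already need on the order of 16 million GiB — far beyond any machine — which motivates the approximate and stochastic algorithms his field develops.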

insideHPC: You have been involved in so many milestone HPC activities in this community – what would you call out as one or two of the high points of your career – some of the things of which you are most proud?

Thomas Schulthess: The end-station for computational nanoscience we developed at the Center for Nanophase Materials Sciences (CNMS) at ORNL. We invested heavily in application and algorithm development, and now we have some of the best-performing codes on petascale systems that are productive research tools in the user program of the nanocenter. Others have adopted the concept, e.g. the simulation labs in Jülich. In Switzerland we are developing version 2 of this concept, where we are pushing HPC application development out into the research groups and communities that develop the models and application codes. The response from the application community, which is now taking charge in 12 projects, makes me confident about the sustained use of supercomputers as scientific instruments.

insideHPC: As Director of the Swiss National Supercomputing Centre you must have extensive administrative responsibilities. Do you still write code?

Thomas Schulthess: The responsibilities are of course much greater than in my previous job, but I have very competent staff to manage operations and I work for an institution with a lean administration that entrusts researchers with the leadership of projects. This means I have to find time to remain active in research and train graduate students. I am expected to develop the user community and a supercomputing strategy that meets their research needs. You have to be an active researcher to be credible for this job; that's just the way science works in Switzerland. I don't write big codes myself anymore, but I still lead teams who do – such codes have to be implemented by professionals who are fully committed to the job.

insideHPC: What are your thoughts on how we can attract the next generation of HPC professionals into the community – and provide them with the experience-based training that they will need to be successful?

Thomas Schulthess: We have to focus on the science and engineering problems we solve, discoveries we facilitate, and technologies created with HPC. We have to push productive HPC and maintain a high standard. This will make HPC interesting and attract bright young people to the field. At the same time we have to introduce HPC training into the computational science education at universities. HPC must become part of undergraduate and graduate curricula, rather than being limited to training courses given by computer centers when researchers need access to systems. Creating highly efficient and scalable simulations requires considering HPC from the very beginning of the thought process. This has to be reflected in education.

insideHPC: You were recently quoted as saying: “Given the remarkable interest in GPU technology from the Swiss computational science community, it is essential that CSCS adopt this technology into its high-end production systems soon.” Why is it essential for an institution like yours to adopt GPUs in a big way?

Thomas Schulthess: Application developers in Switzerland and elsewhere in Europe are rapidly adopting this technology. Supercomputing has to respond to this trend! Since CSCS has established a record of early adoption of new technologies in high-end computing systems, there are high expectations for us to look at GPU technology. At the same time, it is clear that we will only introduce GPU technology into our main production line of systems if it can be used productively at scale. This is not yet the case, but I'm quite certain it will happen within a year or two.

insideHPC: What is your favorite way to spend time when you’re not working?

Thomas Schulthess: I spend all my time away from work with my family. We have two growing teenagers who are harder and harder to keep up with. I'm an outdoor person; I love skiing, hiking, sailing and more.

insideHPC: What motivates you? What is your passion?

Thomas Schulthess: Science! I am a physicist, a researcher; like many of my colleagues, I love to create machines or systems that allow us to do new experiments and look at nature in ways we have not been able to before. In recent years I have had a lot of fun collaborating with peers from other domains to push the envelope with simulation-based science in areas outside my own.

insideHPC: You received the Gordon Bell Prize for attaining the fastest performance ever in a scientific supercomputing simulation of superconductors. Was this the same record-breaking code that recently scaled to 1.84 Petaflops on the Tianhe-1A system in China?

Thomas Schulthess: No, the runs on Tianhe-1A were done with a classical molecular dynamics code. DCA++, with which we set the 1.9 Petaflops record in 2009, uses totally different quantum Monte Carlo algorithms today. The efficiency and scalability of the new code are probably higher today, but most importantly, the new algorithms allow us to reach a level of precision not possible with the implementation used in 2008/9, and the time to solution has improved dramatically. In my field, algorithms are still improving faster than computer architectures do. This is why we have to introduce knowledge about architecture into the communities that develop algorithms.
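DCA++ itself is far beyond the scope of a snippet, but the Monte Carlo family of methods it belongs to can be illustrated with a toy estimator (an assumption-laden sketch, not the DCA++ algorithm): random samples are averaged, and the statistical error shrinks roughly as 1/sqrt(n). That scaling is one reason better sampling algorithms can beat simply buying more flops.

```python
# Toy Monte Carlo illustration only -- DCA++ uses sophisticated quantum
# Monte Carlo, but the basic idea of any Monte Carlo method is the same:
# sample randomly, average, and watch the statistical error shrink
# roughly as 1/sqrt(n_samples). Here we estimate pi by dart-throwing.
import math
import random

def estimate_pi(n_samples: int, seed: int = 42) -> float:
    """Estimate pi from the fraction of random points inside the unit quarter-circle."""
    rng = random.Random(seed)  # fixed seed keeps the toy run reproducible
    hits = sum(
        1
        for _ in range(n_samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4.0 * hits / n_samples

for n in (1_000, 100_000, 10_000_000):
    est = estimate_pi(n)
    print(f"n = {n:>10,}  pi ~ {est:.5f}  error = {abs(est - math.pi):.5f}")
```

A smarter algorithm (importance sampling, better estimators) reduces the constant in front of 1/sqrt(n), which is the software analogue of the hardware speedups he describes.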
