According to experts, detailed and accurate brain models are a game-changer for neuroscience and related research. What if instead of behavioral research, scientists could use the models to study cognition, simulate and study disease, and learn from the ways of the brain—everything from how it uses energy to how memory, representation and even consciousness itself are constructed?

Through the HBP, teams of scientists, doctors and researchers have partnered with supercomputing and data scientists to pursue these and other ambitious goals. Ultimately, the project aims to make major advances in fighting brain cancer, Alzheimer's, CTE and other neurological disorders.

“The human brain is organized on many different levels, from molecules to cells to small circuits and large circuits; and to really understand how all of these different levels are related, and also to understand what makes us human, is one of the biggest challenges of the 21st century,” said Prof. Dr. Katrin Amunts, Director of the Institute of Neuroscience and Medicine (INM) of Research Centre Jülich, Germany, and Scientific Research Director of the Human Brain Project.

The project kicked off in 2013 and is scheduled to continue for a decade. “We have partners from about 24 European countries—people who come from physics, medicine, psychology,” Amunts said. Working with donated brains embedded in paraffin, the researchers slice each brain into microscopically thin sections and record them in great detail. “We have to create an ‘atlas’ (of the brain) that has a very large size in terms of bits and bytes,” Amunts said.

And that’s where her supercomputing colleagues come in.

“The Human Brain Project is a new approach to using supercomputers to understand the brain by modeling it in completely different ways,” said Prof. Dr. Dirk Pleiter, Research Group Leader at the Jülich Supercomputing Centre and Professor of Theoretical Physics at the University of Regensburg. “We have to be able to store vast amounts of data for very fast access; to analyze the data, we have to allow for very quick visualizations with very quick turnarounds. We have to be able to schedule jobs in different ways than we did before, and then the supercomputer becomes more of a useful instrument than it was in the past. Step by step, we are getting there,” he said.

