Bull to Provide DKRZ with Supercomputer for Climate Research

May 12, 2014

PARIS, France, May 12 — The German computing center for climate research DKRZ (Deutsches Klimarechenzentrum) and Bull have signed a contract for the delivery of a petaflops-scale supercomputer, as well as cooperation on climate research simulation. The contract worth 26 million euro covers the delivery of all the key computing and storage components of the new system.

“What does the climate have in store for our future and for the Earth?” It’s a question that arouses a great deal of interest and controversy nowadays. To answer it, climate simulations are an essential tool: the climate system and its complex dynamics are replicated on a computer with the help of numerical models. The new system will be used to process the huge quantities of data (Big Data) needed to carry out effective climate simulation.

Despite its impressive computing performance, the system demonstrates exemplary energy efficiency, with a PUE as low as 1.2. The PUE (Power Usage Effectiveness) is the ratio of the data center’s total energy consumption to the energy actually consumed by the computing equipment itself. This excellent result stems directly from technology developed by Bull for High-Performance Computing (HPC): the system being purchased by the Hamburg climate researchers will be cooled using warm water, a technique that requires significantly less energy than standard cooling systems, as the heat generated by processors and memory modules is extracted as close to the source as possible. The system will also benefit from energy-consumption reductions born of a cooperative project between Bull and the Technical University of Dresden.
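The PUE ratio can be illustrated with a small sketch. The figures below are hypothetical, chosen only to match the PUE of 1.2 quoted for the DKRZ system; they are not taken from the press release.

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total data-center energy divided by
    the energy consumed by the IT equipment itself. 1.0 is the ideal;
    lower values mean less overhead for cooling and infrastructure."""
    return total_facility_kwh / it_equipment_kwh

# Hypothetical example: 1,200 kWh drawn by the whole data center for
# every 1,000 kWh consumed by the compute and storage hardware:
print(pue(1200, 1000))  # → 1.2
```

In other words, a PUE of 1.2 means only 20% of the facility’s energy goes to cooling and other overhead rather than to computation.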

“We are very proud that DKRZ has chosen Bull. Bull is a leading international provider of HPC solutions, and supports HPC research and education in Germany for customers including the Universities of Dresden, Cologne, Aachen, Düsseldorf and Munster, and the Jülich Research Center. The contract signed today with the German computing center for climate research is a new milestone in Bull’s HPC success story,” said Gerd-Lothar Leonhart, CEO of Bull for the DACH area (Germany/Austria/Switzerland).

“As part of the agreement signed today, DKRZ and Bull will cooperate to improve the scalability of climate models and the corresponding software algorithms. In climate simulation, we generate such enormous quantities of data that we not only need efficient hardware, but also highly efficient software, to get to grips with that data,” commented Professor Thomas Ludwig, Director of DKRZ and research team leader.

“We must be able to rely on supercomputers that incorporate the latest technological advances to be able to improve our climate forecasts. With the new system, for example, we hope to gain new insight into the forecasting of cloud formation,” explained Professor Dr. Jochem Marotzke, Director of the Max-Planck Institute of Meteorology, one of the main users of the DKRZ facilities.

The expertise in the optimization of software codes developed by Bull’s Parallel Programming team in Grenoble was a key factor in DKRZ’s decision.

“It is also this proven competence that finally convinced us that Bull was the right partner to have at our side,” Professor Ludwig added.

If the new system were fully installed today, it would rank among the five fastest supercomputers in Germany according to the current Top500 list. And the project breaks another record: its 45 Petabyte storage system is one of the largest in the world. DKRZ is setting new standards with the deployment of this outstanding infrastructure, specifically scaled to support its users’ scientific research programs.

For more information visit: www.dkrz.de

About Bull

Bull is the trusted partner for enterprise data. The Group, which is firmly established in the Cloud and in Big Data, integrates and manages high-performance systems and end-to-end security solutions. Bull’s offerings enable its customers to process all the data at their disposal and put it to new uses. Bull converts data into value for organizations, in a completely secure manner.
