Introduction

The most powerful system currently operated at ZIB is part of the North-German Supercomputing Alliance (HLRN). Consultants from physics and materials science, chemistry and bioinformatics, as well as earth system research and the engineering sciences advise researchers from Berlin and the other HLRN member states. The consultants support the efficient implementation of projects in both large-scale computing and data analysis, facilitating innovative research.

The HLRN-III Complex at ZIB

"Konrad", a Cray XC40/XC30 massively parallel supercomputer, is the main compute component of the HLRN-III complex operated at ZIB. It is complemented by peta-scale file systems for on-line storage. For data archiving, a magnetic tape library with several tape robots is provided in a separate high-security room.

With the ongoing convergence of requirements in the HPC and Big Data worlds, such large-scale storage capacities providing high bandwidth to data sets become vital for current and future workloads.

Storage Infrastructure for HLRN-III at a glance:

3.7 PByte: on-line storage capacity is available in two globally accessible parallel file systems (Lustre), offering high-bandwidth access to application data in batch jobs (see the MPI-IO sketch after this list).

0.5 PByte: on-line storage capacity is provided in a globally accessible NAS appliance for permanent project data and program code development.

Peta-scale: a tape library for archiving large data sets is operated independently by ZIB.
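To illustrate how batch jobs typically access such a parallel file system, here is a minimal MPI-IO sketch in which each rank writes its own block of a shared file collectively; the file name, block size, and payload are illustrative assumptions, not HLRN-specific conventions.

    /* Minimal MPI-IO sketch: every rank writes a disjoint block of one
     * shared file using a collective call, so the MPI library can
     * aggregate requests and exploit the stripe bandwidth of a parallel
     * file system such as Lustre. */
    #include <mpi.h>
    #include <stdlib.h>

    #define BLOCK_DOUBLES (1 << 20)   /* 8 MiB of doubles per rank */

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        double *buf = malloc(BLOCK_DOUBLES * sizeof(double));
        for (int i = 0; i < BLOCK_DOUBLES; i++)
            buf[i] = (double)rank;    /* dummy payload */

        MPI_File fh;
        MPI_File_open(MPI_COMM_WORLD, "output.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY,
                      MPI_INFO_NULL, &fh);

        /* Each rank targets its own contiguous region of the file. */
        MPI_Offset offset = (MPI_Offset)rank * BLOCK_DOUBLES * sizeof(double);
        MPI_File_write_at_all(fh, offset, buf, BLOCK_DOUBLES, MPI_DOUBLE,
                              MPI_STATUS_IGNORE);

        MPI_File_close(&fh);
        free(buf);
        MPI_Finalize();
        return 0;
    }

The collective write (MPI_File_write_at_all) is the relevant design choice: it lets the MPI library coordinate ranks and drive many Lustre stripes concurrently, rather than issuing many small independent writes.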

Transition Towards New Technologies

The next generation of HPC systems will be shaped by two technology trends:

many-core CPUs, which introduce a new scale of parallelism per node, and

large-capacity non-volatile memory.

These technologies will offer opportunities for vastly more complex and resource-demanding computational workflows, but their full benefits can only be exploited through radical code modernization.
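As a rough illustration of what such node-level modernization involves, the following sketch combines OpenMP threading across the many cores with explicit per-core SIMD vectorization; the kernel itself (an axpy loop) is purely illustrative and assumed here for brevity.

    /* Sketch of node-level code modernization: OpenMP threads exploit
     * the many cores, while the 'simd' clause exposes the loop to the
     * wide vector units of a many-core CPU such as Xeon Phi. */
    #include <stddef.h>

    void axpy(size_t n, double a, const double *restrict x,
              double *restrict y)
    {
        #pragma omp parallel for simd
        for (size_t i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }

Compiled with the compiler's OpenMP flag (e.g. -fopenmp with GCC or -qopenmp with the Intel compiler), this single loop scales across dozens of cores and their vector lanes, which is exactly the degree of parallelism a many-core node demands.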

To support the transition of HLRN applications towards these emerging technologies, we operate a Cray Test and Development System (TDS) that provides the latest Intel Xeon Phi many-core compute capabilities and Cray DataWarp nodes with non-volatile storage tightly integrated into the Cray Aries network.
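A hedged sketch of how an application might use such burst-buffer storage: on Cray systems, a job-scoped DataWarp allocation is typically exposed as a POSIX mount point through the DW_JOB_STRIPED environment variable; the checkpoint file name and data sizes below are illustrative assumptions.

    /* Sketch: write a checkpoint to a DataWarp burst buffer, assuming
     * the job's allocation is exported via DW_JOB_STRIPED as an
     * ordinary directory path. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        const char *bb = getenv("DW_JOB_STRIPED"); /* burst-buffer mount */
        if (!bb) {
            fprintf(stderr, "no DataWarp allocation in this job\n");
            return 1;
        }

        char path[4096];
        snprintf(path, sizeof path, "%s/checkpoint.dat", bb);

        FILE *f = fopen(path, "wb");
        if (!f) { perror("fopen"); return 1; }

        double state[1024] = {0};                  /* dummy application state */
        fwrite(state, sizeof(double), 1024, f);
        fclose(f);
        return 0;
    }

Because the non-volatile storage sits in the Aries network close to the compute nodes, such checkpoints can be written far faster than to the external file system and staged out afterwards.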

The Cray Test & Development System (TDS) at a glance:

16 KNC nodes: each with an Intel Xeon Phi 5120D coprocessor (KNC), a 10-core Intel Ivy Bridge host CPU, and 32 GB of memory.