High Performance & Research Computing

CCIT’s High-Performance Computing group maintains various HPC (“supercomputer”) resources and offers support for Mines faculty and students using HPC systems in their research efforts. The goal of the service is to help scientists do their science through the application of HPC.

Want to get started with supercomputing?

Supercomputing is an increasingly important part of engineering and scientific research. Mines has a number of distinct high-performance computing platforms; Wendian is the newest.

Wendian came online in the fall of 2018. It contains the latest generation of Intel processors, Nvidia GPUs, and OpenPower nodes: 82 compute nodes plus 5 GPU nodes, which together deliver over 350 Tflops. It also has 3 administration nodes and 6 file-system nodes serving 1,152 Tbytes of raw storage at over 10 Gbytes/sec.

Wendian runs CentOS 7 Linux. Parallel jobs are managed via the Slurm scheduler. The programming languages and models of choice include C, C++, Fortran, OpenMP, OpenACC, CUDA, and MPI.
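To give a flavor of what an MPI program looks like (a minimal sketch only, not a Mines-specific template; the file name hello_mpi.c is arbitrary), the following C program has every process report its rank:

    /* hello_mpi.c: minimal MPI example in which each process reports its rank */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, size;

        MPI_Init(&argc, &argv);                  /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* id of this process */
        MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of processes */

        printf("Hello from rank %d of %d\n", rank, size);

        MPI_Finalize();                          /* shut down MPI cleanly */
        return 0;
    }

On a Slurm-managed cluster such as Wendian, a program like this would typically be compiled with an MPI compiler wrapper (for example, mpicc hello_mpi.c -o hello_mpi) and launched across multiple cores with srun; the exact modules and submission commands depend on the site configuration, so consult the HPC group’s documentation.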

Cooling

The XO1132g and X01114GTS servers have on-board water cooling for the CPUs. These are all fed water from a coolant distribution unit (CDU), which removes about 60% of the total heat generated. The water to the compute resources runs in a closed loop; the CDU contains a heat exchanger in which heat from the closed loop warms chilled water from central facilities. The remaining heat from these servers, and the heat generated by the other nodes, is removed by two in-row coolers. The equipment list includes two APC ACRC301S In-Row Coolers and a MOTIVAIR MCDU25 Coolant Distribution Unit.

Mio.Mines.Edu

Your Supercomputer

You have access to a 120-plus Tflop HPC cluster for student and faculty research use.

Mines provides an advanced supercomputing cluster called “Mio” for the use of students and faculty who wish to take advantage of this high-performance computing resource.

For students

Students have already purchased some access to Mio with Tech Fee funds—usable for general research, class projects, and learning HPC techniques. Students may also at times use Mio nodes purchased by their academic advisor or other professors. The HPC Group offers assistance to students (and faculty) to get up and running on Mio. Individual consultations and workshops are available.

Hardware description

What’s in a name?

The name “Mio” is a play on words. It is a Spanish translation of the word “mine,” as in “belongs to me.” The phrase “The computer is mine” can be translated as “El ordenador es mío.”

Mines’ Big Iron Supercomputer

154 Tflops, 17.4 Tbytes, 10,496 cores, 85 kW

AuN/Mc2 is a unique machine, composed of two distinct compute platforms, or partitions, that share a common file system. Both platforms, as well as the file system, were purchased from IBM as a package. The shared file system holds 480 TB and efficiently supports parallel operation (multiple cores accessing it at the same time). The two compute platforms are optimized for different purposes.

AuN

The smaller compute platform, in terms of capability, is AuN (“Golden”). It is a traditional HPC platform using standard Intel processors. It contains 144 compute nodes connected by a high-speed network. Each node contains 16 Intel Sandy Bridge compute cores and 64 GB of memory, for a total of 2,304 cores and 9,216 GB of memory. AuN is rated at 50 Tflops. It is housed in two double-wide racks with 72 nodes in each rack. AuN is designed to run jobs that require more memory per core.

Mc2

Mc2 (“Energy”) is an IBM BlueGene. Mc2 is housed in a single large 4’ x 4’ rack, currently half full with room for expansion. The BlueGene computer is designed from the ground up as an HPC platform. It has a very-high-speed network connecting the nodes, so applications can scale well. Each node has a processor dedicated to systems operations in addition to the 16 cores that are available for users. The processors on Mc2 are IBM Power processors. Mc2 has 512 compute nodes, each with 16 GB of memory, for a total of 8,192 user cores and 8,192 GB of memory. Mc2 is rated at 104 Tflops. The total power consumption of the system is about 85 kW, with only 35 kW used by Mc2. Mc2 is water cooled; AuN currently runs with rear-door heat exchangers but could run with air cooling only. BlueM, the combined AuN/Mc2 system, is housed at the National Renewable Energy Laboratory in Golden, CO. Mc2 is designed for jobs that can make use of a large number of cores.
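Taken together, the two partitions account for the headline figures above: 2,304 + 8,192 = 10,496 cores; 9,216 GB + 8,192 GB = 17,408 GB, or roughly 17.4 Tbytes of memory; and 50 + 104 = 154 Tflops, with a total power draw of about 85 kW.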