GPU Compute Node Performance Improvements

We are replacing the network cards in all of our HPC GPU nodes with 10Gbps fibre cards. This will increase throughput to the storage system and eliminate a performance bottleneck.

We currently have eight GPU nodes in the HPC, and we expect that number to grow over time. Previously, these nodes had a 1Gbps copper network connection to the GPFS storage system.

We have recently purchased, and are in the process of installing, new 10Gbps fibre network cards. This means a potential 10x increase in throughput and filesystem I/O for jobs running on these nodes.

Several GPU nodes sit in our genacc_q partition, which all RCC users have access to. Additionally, we will reach out to those of you who purchased GPU nodes to schedule a convenient time and date for the upgrade. Each upgrade should take about three hours.

If you want to try running GPU jobs on the HPC, we have a handy guide to help get you started! We support CUDA, TensorFlow, and many other GPU software applications.
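As a starting point, a GPU job submission might look like the sketch below. This is a hypothetical example only: the partition name genacc_q comes from this announcement, but the GPU request syntax, module names, and resource limits are assumptions; consult our guide for the exact settings on our cluster.

```shell
#!/bin/bash
# Hypothetical Slurm batch script for a single-GPU job.
#SBATCH --job-name=gpu_test
#SBATCH --partition=genacc_q   # general-access partition mentioned above
#SBATCH --gres=gpu:1           # request one GPU (exact syntax may differ)
#SBATCH --time=01:00:00

module load cuda               # module name is an assumption; check `module avail`
nvidia-smi                     # confirm the job can see the GPU
```

Submit it with `sbatch` and check the job's output file to verify the GPU was allocated.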