
Pawsey trials NVIDIA GPUs for Nimbus cloud

Kicks off early adopter program.

The Pawsey Supercomputing Centre will offer graphics processing unit (GPU) compute nodes for researchers on its Nimbus cloud and is calling for trial users.

A total of twelve NVIDIA Tesla V100 GPUs with 16 gigabytes of memory each will be installed in six HPE Apollo SX40 server nodes that are to be added to the Nimbus cloud at Pawsey.

Nimbus is an Ocata OpenStack deployment that makes Ubuntu virtual machines available to researchers.

Pawsey said the GPUs will be used to accelerate artificial intelligence, high performance computing and graphics jobs by giving researchers access to VMs with "more computational power" behind them.

Compared to general-purpose central processing units (CPUs), the massively parallel GPUs, each with 5120 CUDA cores and 640 Tensor cores, offer a significant performance boost for AI and similar workloads.

A single GPU offers the performance of up to 100 CPUs, Pawsey said.

The NVIDIA GPUs offer between 7 and 7.8 tera floating point operations per second (TFLOPS) of double-precision performance.

For single-precision work the figure is 14 to 15.7 TFLOPS per processor. The Tensor cores, which can accelerate Google's TensorFlow machine learning library, manage between 112 and 125 TFLOPS of mixed-precision performance on a Tesla V100 GPU.
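Those single-precision figures can be sanity-checked with a back-of-the-envelope calculation: each CUDA core can retire one fused multiply-add (two floating point operations) per clock cycle. The boost clocks used below are the publicly listed figures for the PCIe and SXM2 variants of the V100, not numbers from Pawsey, so treat this as an illustrative sketch:

```python
# Rough theoretical peak FP32 throughput for a Tesla V100.
# Assumption: boost clocks of ~1.38 GHz (PCIe) and ~1.53 GHz (SXM2),
# per NVIDIA's published V100 specifications.
CUDA_CORES = 5120
FLOPS_PER_CORE_PER_CLOCK = 2  # one FMA counts as 2 floating point ops

def peak_fp32_tflops(boost_clock_ghz: float) -> float:
    """Theoretical peak single-precision TFLOPS at a given boost clock."""
    return CUDA_CORES * FLOPS_PER_CORE_PER_CLOCK * boost_clock_ghz / 1000

print(f"V100 PCIe (~1.38 GHz boost): {peak_fp32_tflops(1.38):.1f} TFLOPS")
print(f"V100 SXM2 (~1.53 GHz boost): {peak_fp32_tflops(1.53):.1f} TFLOPS")
```

The two clock speeds bracket the quoted 14 to 15.7 TFLOPS single-precision range, which is why the spec is given as a range rather than a single number.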
