Abstract

cuDNN is a low-level library that provides GPU
kernels frequently used in deep learning. Specifically, cuDNN implements
several equivalent convolution algorithms, whose
performance and memory footprint may vary considerably,
depending on the layer dimensions. When cuDNN selects an algorithm
automatically, it decides on a per-layer basis and therefore often falls
back to slower algorithms that fit within the workspace size
constraints. We present µ-cuDNN,
a thin wrapper library for cuDNN that transparently divides
layers’ mini-batch computation into multiple micro-batches, both
on a single GPU and on a heterogeneous set of GPUs. Based on
Dynamic Programming and Integer Linear Programming (ILP),
µ-cuDNN enables faster algorithms by decreasing the workspace
requirements. At the same time, µ-cuDNN does not decrease the
accuracy of the results, effectively decoupling statistical efficiency
from hardware efficiency. We demonstrate the effectiveness of
µ-cuDNN for the Caffe and TensorFlow frameworks, achieving
speedups of 1.63x for AlexNet and 1.21x for ResNet-18 on
the P100-SXM2 GPU. We also show that µ-cuDNN achieves
speedups of up to 4.54x (1.60x on average) for DeepBench’s
convolutional layers on the V100-SXM2 GPU. In a distributed
setting, µ-cuDNN attains a 2.20x speedup over a single GPU when
training ResNet-18 on a heterogeneous GPU cluster.
These results indicate that using micro-batches can seamlessly
increase the performance of deep learning, while maintaining
the same overall memory footprint.
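
To make the dynamic-programming idea in the abstract more concrete, the following is a minimal sketch under stated assumptions, not the paper's actual µ-cuDNN implementation: a mini-batch is split into micro-batches so that the total per-layer convolution time is minimized under a fixed workspace budget. The function `measure_fastest_time` and all other names are hypothetical placeholders; in practice such a cost would come from benchmarking cuDNN's algorithms at each candidate micro-batch size (e.g., with cudnnFindConvolutionForwardAlgorithmEx). The ILP-based variant mentioned in the abstract is not shown here.

```python
# Hypothetical sketch of a per-layer dynamic program: split a mini-batch into
# micro-batches so that a fast convolution algorithm fitting a fixed workspace
# budget can be used for each chunk. Names are illustrative, not µ-cuDNN's API.
from functools import lru_cache

def optimal_micro_batches(mini_batch, workspace_limit, measure_fastest_time):
    """Return (total_time, micro_batch_sizes) minimizing the layer's runtime
    when the mini-batch is processed as a sequence of micro-batches.

    measure_fastest_time(u, workspace_limit) is assumed to benchmark the
    cuDNN convolution algorithms whose workspace fits the limit at batch
    size u and return the best runtime (placeholder supplied by the caller).
    """
    cost = lru_cache(maxsize=None)(measure_fastest_time)  # memoize repeated sizes

    best_time = [0.0] + [float("inf")] * mini_batch
    last_size = [0] * (mini_batch + 1)
    for n in range(1, mini_batch + 1):
        for u in range(1, n + 1):          # u = size of the last micro-batch
            t = best_time[n - u] + cost(u, workspace_limit)
            if t < best_time[n]:
                best_time[n], last_size[n] = t, u

    # Recover the chosen micro-batch sizes from the DP table.
    sizes, n = [], mini_batch
    while n > 0:
        sizes.append(last_size[n])
        n -= last_size[n]
    return best_time[mini_batch], sizes
```

Because the result depends only on per-micro-batch runtimes, such a split leaves the computed gradients, and hence statistical efficiency, unchanged; only the schedule of the convolution calls differs.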