Horovod

Horovod is a distributed training framework for TensorFlow. The goal of Horovod is to make distributed Deep Learning
fast and easy to use.

Why not traditional Distributed TensorFlow?

The primary motivation for this project is to make it easy to take a single-GPU TensorFlow program and successfully train
it on many GPUs faster. This has two aspects:

How much modification does one have to make to a program to make it distributed, and how easy is it to run it?

How much faster would it run in distributed mode?

Internally at Uber we found the MPI model to be much more straightforward and to require far fewer code changes than
Distributed TensorFlow with parameter servers. See the Usage section for more details.

In addition to being easy to use, Horovod is fast. Below is a chart representing a benchmark that was run on 32
servers with 4 Pascal GPUs each, connected by a RoCE-capable 25 Gbit/s network:

Horovod achieves 90% scaling efficiency for both Inception V3 and ResNet-101, and 79% scaling efficiency for VGG-16.
See the Benchmarks page to find out how to reproduce these numbers.

Installing MPI and NCCL may seem like an extra hassle, but it only needs to be done once by the team dealing
with infrastructure, while everyone else in the company who builds models can enjoy the simplicity of training them at
scale.

Install

This basic installation is good for laptops and for getting to know Horovod.
If you're installing Horovod on a server with GPUs, read the Horovod on GPU page.
If you want to use Docker, read the Horovod in Docker page.
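
Assuming an MPI implementation such as Open MPI is already installed, the basic installation is a single pip command:

```bash
$ pip install horovod
```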

Concepts

Horovod core principles are based on MPI concepts such as size, rank,
local rank, allreduce, allgather and broadcast. See here for more details.
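
As a quick illustration of these concepts, a script like the sketch below, launched as several processes, prints each
process's view of the job (output will differ per process):

```python
# A minimal sketch: print each process's view of the MPI concepts above.
# Assumes Horovod is installed and the script is launched with several
# processes, e.g. via mpirun.
import horovod.tensorflow as hvd

hvd.init()

print('size:', hvd.size())              # total number of processes in the job
print('rank:', hvd.rank())              # unique id of this process, 0..size-1
print('local rank:', hvd.local_rank())  # id of this process within its server
```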

Usage

To use Horovod, make the following additions to your program (a combined sketch follows these steps):

Run hvd.init().

Pin a server GPU to be used by this process using config.gpu_options.visible_device_list.
With the typical setup of one GPU per process, this can be set to local rank. In that case, the first process on
the server will be allocated the first GPU, the second process will be allocated the second GPU, and so forth.

Scale the learning rate by number of workers. Effective batch size in synchronous distributed training is scaled by
the number of workers. An increase in learning rate compensates for the increased batch size.

Wrap the optimizer in hvd.DistributedOptimizer. The distributed optimizer delegates gradient computation
to the original optimizer, averages gradients using allreduce or allgather, and then applies those averaged
gradients.

Add hvd.BroadcastGlobalVariablesHook(0) to broadcast initial variable states from rank 0 to all other processes.
This is necessary to ensure consistent initialization of all workers when training is started with random weights or
restored from a checkpoint. Alternatively, if you're not using MonitoredTrainingSession, you can simply execute
the hvd.broadcast_global_variables op after global variables have been initialized.

Modify your code to save checkpoints only on worker 0 to prevent other workers from corrupting them.
This can be accomplished by passing checkpoint_dir=None to tf.train.MonitoredTrainingSession if
hvd.rank() != 0.
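
A minimal sketch combining all of the steps above, assuming TensorFlow 1.x; the tiny dense-layer model and random
inputs are placeholders for your own model and input pipeline:

```python
import tensorflow as tf
import horovod.tensorflow as hvd

# Initialize Horovod.
hvd.init()

# Pin this process to a single GPU based on its local rank.
config = tf.ConfigProto()
config.gpu_options.visible_device_list = str(hvd.local_rank())

# Placeholder model: a single dense layer on random data.
features = tf.random_normal([32, 10])
labels = tf.random_normal([32, 1])
predictions = tf.layers.dense(features, 1)
loss = tf.losses.mean_squared_error(labels, predictions)

# Scale the learning rate by the number of workers.
opt = tf.train.GradientDescentOptimizer(0.01 * hvd.size())

# Wrap the optimizer so gradients are averaged across workers.
opt = hvd.DistributedOptimizer(opt)

global_step = tf.train.get_or_create_global_step()
train_op = opt.minimize(loss, global_step=global_step)

# Broadcast initial variable states from rank 0 to all other processes,
# and stop after a fixed number of steps.
hooks = [hvd.BroadcastGlobalVariablesHook(0),
         tf.train.StopAtStepHook(last_step=1000)]

# Save checkpoints only on worker 0 to prevent corruption by other workers.
checkpoint_dir = '/tmp/train_logs' if hvd.rank() == 0 else None

with tf.train.MonitoredTrainingSession(checkpoint_dir=checkpoint_dir,
                                       config=config,
                                       hooks=hooks) as mon_sess:
    while not mon_sess.should_stop():
        mon_sess.run(train_op)
```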

Running Horovod

The example commands below show how to run distributed training. See the Running Horovod
page for more instructions, including RoCE/InfiniBand tweaks and tips for dealing with hangs. See the
Horovod in Docker page for details about running Horovod in Docker.
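
As an illustration (train.py and the server names are placeholders for your own script and hosts), Open MPI launches
might look like:

```bash
# Run on a single server with 4 GPUs:
$ mpirun -np 4 -H localhost:4 python train.py

# Run on 4 servers with 4 GPUs each:
$ mpirun -np 16 -H server1:4,server2:4,server3:4,server4:4 python train.py
```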

Keras

Horovod supports Keras and regular TensorFlow in similar ways.

Note: Keras 2.0.9 has a known issue that makes each worker allocate all GPUs on the server, instead of the GPU
assigned by the local rank. If you have multiple GPUs per server, upgrade to Keras 2.1.2 or downgrade to Keras 2.0.8.
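
A minimal Keras sketch, assuming the horovod.keras module and Keras with the TensorFlow backend; the one-layer model
and random data are placeholders:

```python
import numpy as np
import tensorflow as tf
import keras
import horovod.keras as hvd

# Initialize Horovod and pin this process to a single GPU.
hvd.init()
config = tf.ConfigProto()
config.gpu_options.visible_device_list = str(hvd.local_rank())
keras.backend.set_session(tf.Session(config=config))

# Placeholder model.
model = keras.models.Sequential([keras.layers.Dense(1, input_dim=10)])

# Scale the learning rate by the number of workers and wrap the optimizer.
opt = hvd.DistributedOptimizer(keras.optimizers.SGD(lr=0.01 * hvd.size()))
model.compile(loss='mse', optimizer=opt)

# Broadcast initial variable states from rank 0 to all other processes.
callbacks = [hvd.callbacks.BroadcastGlobalVariablesCallback(0)]

x, y = np.random.rand(32, 10), np.random.rand(32, 1)
model.fit(x, y, epochs=1, callbacks=callbacks,
          verbose=1 if hvd.rank() == 0 else 0)
```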

Estimator API

Horovod supports the Estimator API and regular TensorFlow in similar ways.
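
A sketch of the key Estimator pieces, assuming TensorFlow 1.x; model_fn and input_fn below are placeholders for your
own model and input pipeline:

```python
import tensorflow as tf
import horovod.tensorflow as hvd

hvd.init()

# Pin this process to a single GPU.
config = tf.ConfigProto()
config.gpu_options.visible_device_list = str(hvd.local_rank())
run_config = tf.estimator.RunConfig(session_config=config)

def input_fn():
    # Placeholder input pipeline.
    features = tf.random_normal([32, 10])
    labels = tf.random_normal([32, 1])
    return tf.data.Dataset.from_tensors((features, labels)).repeat()

def model_fn(features, labels, mode):
    # Placeholder model; scale the learning rate and wrap the optimizer
    # exactly as in the regular TensorFlow usage above.
    predictions = tf.layers.dense(features, 1)
    loss = tf.losses.mean_squared_error(labels, predictions)
    opt = hvd.DistributedOptimizer(
        tf.train.GradientDescentOptimizer(0.01 * hvd.size()))
    train_op = opt.minimize(loss, tf.train.get_or_create_global_step())
    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

# Save checkpoints only on worker 0.
estimator = tf.estimator.Estimator(
    model_fn=model_fn,
    config=run_config,
    model_dir='/tmp/train_logs' if hvd.rank() == 0 else None)

# Broadcast initial variable states from rank 0 via a training hook.
estimator.train(input_fn=input_fn,
                steps=1000,
                hooks=[hvd.BroadcastGlobalVariablesHook(0)])
```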

Inference

Learn how to optimize your model for inference and remove Horovod operations from the graph here.

Tensor Fusion

One of the unique things about Horovod is its ability to interleave communication and computation, coupled with the
ability to batch small allreduce operations, which results in improved performance. We call this batching feature
Tensor Fusion.