Paddle On Kubernetes

In this article, we will show how to run a Paddle training job on a single-CPU machine using Kubernetes. A follow-up article will cover running Paddle training jobs on a distributed cluster.

Build Docker Image

In a distributed Kubernetes cluster, we would use Ceph or another shared storage system to store the training data, so that every process in the Paddle training job can retrieve it. In this example, we only demonstrate a training job on a single machine. To simplify the environment requirements, we put the training data directly into Paddle's Docker image, so we need to build a Paddle Docker image that already contains the training data.

Paddle's Quick Start Tutorial introduces how to download the data and train a model using scripts from Paddle's source code.
The paddledev/paddle:cpu-demo-latest image contains the Paddle source code and demos. (Caution: the default Paddle image paddledev/paddle:cpu-latest does not include the source code; Paddle images for other versions are listed in the Docker installation guide.) So we run this container, download the training data inside it, and then commit the whole container as a new Docker image.

Commit Docker Image
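The run-download-commit workflow described above might look like the following. The data-script path and the new image tag (mypaddle/paddle:quickstart) are placeholders chosen for this sketch; adjust them to match your Paddle checkout and registry naming.

```shell
# Start the demo container, which ships with the Paddle source and demos.
docker run --name quick_start_data -it paddledev/paddle:cpu-demo-latest

# Inside the container: download the Quick Start training data using the
# script from the Paddle source tree (path is an assumption based on the
# Paddle source layout; verify it in your image).
cd /root/paddle/demo/quick_start/data
./get_data.sh

# Back on the host, in another terminal: commit the container, now
# containing the training data, as a new image. The tag below is a
# placeholder of our choosing.
docker commit quick_start_data mypaddle/paddle:quickstart
```

After the commit, `docker images` should list the new image, and it can be pushed to a registry your Kubernetes nodes can pull from.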

Use Kubernetes For Training

We will use a Kubernetes Job for the training process; the following steps show how to run the training with Kubernetes.

Create YAML Files

The output produced inside the container will be lost when the job finishes (i.e., the container stops running), so when creating the container we need to mount a volume out to the local disk to persist the training result. Using the image we created earlier, we can define a Kubernetes Job; the YAML contents are as follows:
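A minimal sketch of such a Job manifest is shown below. The image tag (mypaddle/paddle:quickstart), the host path, and the training command are assumptions for illustration; the hostPath volume is what keeps the output on the node's local disk after the container exits.

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: quickstart
spec:
  template:
    metadata:
      name: quickstart
    spec:
      restartPolicy: Never
      volumes:
      # hostPath mounts a directory on the node so the training output
      # survives after the container stops (path is a placeholder).
      - name: output
        hostPath:
          path: /home/work/paddle_output
      containers:
      - name: quickstart
        # The image committed earlier, with the training data baked in
        # (tag is a placeholder).
        image: mypaddle/paddle:quickstart
        # Training entry point; path and script name are assumptions
        # based on the Quick Start demo layout.
        command: ["bin/bash", "-c", "cd /root/paddle/demo/quick_start && ./train.sh"]
        volumeMounts:
        - name: output
          mountPath: /root/paddle/demo/quick_start/output
```

With this saved as, say, paddle-job.yaml, the job can be created with `kubectl create -f paddle-job.yaml`, and its progress inspected with `kubectl get pods` and `kubectl logs <pod-name>`.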