Image Recognition with Tensorflow training on Kubernetes

The big picture

Modern visual recognition is done with deep neural networks (DNNs). One framework to build this kind of network, and I would say the most famous one, is TensorFlow from Google. Being open source and simply awesome, it is perfect for playing around and building your own visual recognition system. As compute power and especially RAM have grown, much more complex networks are now possible compared to the 1990s, when there were typically only one or two hidden layers.

One such architecture is the convolutional neural network (CNN). The idea is loosely inspired by the structure of the brain: intensively train a network on huge numbers of images and let it learn features inside its many hidden layers. Only the last layer connects those features to actual categories. Similar to our brain, the network learns concepts and patterns, not the picture groups themselves.

After a lot of compute power has been spent training these networks, they can easily be reused for new images by replacing only the last layer with a new one representing the categories to be trained. Training the network then means training only the connections between this last layer and the rest of the network. That training is extremely fast (minutes, compared to months for the complete network). The charming effect is that we train only the “mapping” from features to categories. This is what we are going to do now.
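To make the idea concrete, here is a toy sketch (not the TensorFlow code used later) of what “training only the last layer” means: the frozen network has already turned each image into a fixed feature vector, and we only fit a simple classifier on top of those vectors. The sizes and data below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend the frozen network already mapped 200 images to
# 2048-dimensional feature vectors ("bottlenecks").
features = rng.normal(size=(200, 2048))
labels = (features[:, 0] > 0).astype(float)  # toy two-category labels

# The new "last layer": a single weight vector, trained by
# plain gradient descent on a logistic loss.
weights = np.zeros(2048)
for _ in range(200):
    preds = 1.0 / (1.0 + np.exp(-features @ weights))  # sigmoid
    weights -= 0.1 * features.T @ (preds - labels) / len(labels)

accuracy = np.mean(((features @ weights) > 0) == labels.astype(bool))
print(accuracy)
```

Only the weight vector is updated; the feature extractor (the rest of the network) stays untouched, which is why this kind of retraining takes minutes instead of months.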

Basically, the development of such a system can be divided into two parts. The first part (training) is described here; for the “use”, i.e. classification, have a look at the second part on my blog. I developed this system together with a good friend and colleague of mine, Niklas Heidloff; check out his blog and Twitter account. The described system has three main parts: two Docker containers described in this blog and one epic frontend described in Niklas’ blog. The source code can be found on GitHub.

ImageNet

If you want to train a neural network (supervised learning), you need a lot of images sorted into categories. Not ten or a hundred, but rather hundreds of thousands or even 15 million pictures. A wonderful source for this is ImageNet: more than 14 million pictures organized into more than 20,000 categories, so a perfect source for training this kind of network. Google has done just that and participated in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). Not only Google but many other research institutes have built networks on top of TensorFlow to achieve better image recognition. The outcome is a set of pre-trained models which can be used for systems like the one we want to build.

Tensorflow for poets

As always, it is best to stand on the shoulders of giants, so in our case we use the Python code developed by Google for its Codelabs. In this fascinating and content-rich online TensorFlow training, Google provides Python code to retrain the CNN and to use the newly trained model to classify images. Actually, our training part just takes the original code, wraps it in a Docker container, and connects that container to an Object Store. So there is not much new work here, but it is a nice and handy way to use this code for a project of your own. I highly recommend taking the 15 minutes for the online training to learn how to use TensorFlow and Python.

MobileNet vs. Inception

As discussed, there are many pre-trained networks available; the most famous ones are Inception and MobileNet. Inception has a much higher classification accuracy but also needs more compute power, both for training and for classification. Since we run the training with Kubernetes in “the cloud”, that is not a big problem. But because we want to run the classifier later on OpenWhisk, we need to take care of its RAM usage (512 MB). The Docker container can be configured to train either model, but for OpenWhisk we are limited to MobileNet.

Build your own classifier

As you can see in the picture, we need to build two containers. The left one loads the training images and categories from an Object Store, trains the neural network, and uploads the trained network back to the Object Store. This container can run on your laptop or somewhere in “the cloud”. As I have developed a new passion for Kubernetes, I added a minimal YAML file to start the Docker container on a Kubernetes cluster. It does not really use multiple instances, as the Python code only uses one container, but see it as a kind of “offloading” of the workload.

The second container (described in the next article) runs on OpenWhisk and uses the pre-trained network downloaded from the Object Store.

The Dockerfile is straightforward. We use the TensorFlow Docker image as base and install the git and zip packages (for cloning the code and unpacking the training data). Then we install all necessary Python requirements. As all TensorFlow-related Python packages are already installed, these requirements are only for accessing the Object Store (see my blog article). Then we clone the official tensorflow-for-poets GitHub repository, add our execution shell script, and finish with the CMD that calls this script.
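A minimal sketch of such a Dockerfile could look like this; the base image tag, the requirements file, and the script name are assumptions and may differ from the actual repository:

```dockerfile
# Assumed base image; pick the TensorFlow tag you actually use.
FROM tensorflow/tensorflow:1.7.0

# git to clone the training code, zip/unzip for the training data.
RUN apt-get update && apt-get install -y git zip unzip

# Python packages for talking to the (Swift) Object Store.
COPY requirements.txt /requirements.txt
RUN pip install -r /requirements.txt

# The retraining code from Google's codelab.
RUN git clone https://github.com/googlecodelabs/tensorflow-for-poets-2 /tf
WORKDIR /tf

# Our execution script does the download/train/upload cycle.
COPY train.sh /tf/train.sh
CMD ["/bin/bash", "/tf/train.sh"]
```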

Execution Script

Shell

#!/usr/bin/env bash

echo ${TF_MODEL}

export OS_AUTH_URL=https://identity.open.softlayer.com/v3
export OS_IDENTITY_API_VERSION=3
export OS_AUTH_VERSION=3

swift auth
swift download ${OS_BUCKET_NAME} ${OS_FILE_NAME}
unzip ${OS_FILE_NAME} -d tf_files/photos

python -m scripts.retrain \
  --bottleneck_dir=tf_files/bottlenecks \
  --how_many_training_steps=5000 \
  --model_dir=tf_files/models/ \
  --summaries_dir=tf_files/training_summaries \
  --output_graph=tf_files/retrained_graph.pb \
  --output_labels=tf_files/retrained_labels.txt \
  --architecture=${TF_MODEL} \
  --image_dir=tf_files/photos

cd tf_files
swift upload tensorflow retrained_graph.pb
swift upload tensorflow retrained_labels.txt

All important and sensitive parameters are configured via environment variables passed in the Docker container call. The basic parameters that always stay the same are set in the script itself: where to do the Keystone authentication and which protocol version to use for the Object Store. The swift command downloads a zip file containing all training images, with a subfolder for each category. So you need to build a folder structure like this one:

Shell

.
|- Category-A/
|- Category-B/
|- Category-C/
|- NegativeExamples/
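As an illustration, the following sketch builds such a folder structure and packages it into the zip file the container downloads; the category and file names are only examples:

```python
import os
import zipfile

# Example categories; use your own names, one folder per category.
categories = ["Category-A", "Category-B", "Category-C", "NegativeExamples"]
for category in categories:
    os.makedirs(os.path.join("photos", category), exist_ok=True)

# ...copy your training images into the category folders here...

# Pack the folder tree into the zip the training container expects.
with zipfile.ZipFile("training-data.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    for root, _, files in os.walk("photos"):
        for name in files:
            path = os.path.join(root, name)
            zf.write(path, os.path.relpath(path, "photos"))
```

The zip is then uploaded to your Object Store bucket under the name you later pass in as OS_FILE_NAME.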

The execution script unpacks the training data and calls the retrain script from tensorflow-for-poets. Important parameters are how_many_training_steps (which can be reduced to speed things up for testing) and the architecture. As the architecture can be changed depending on how accurate the classifier has to be and how much memory is available to it, this parameter is also passed in from outside, via the TF_MODEL environment variable.

After building the Docker container and pushing it to Docker Hub, a YAML file triggers Kubernetes to run the container with the given parameters, many of them taken from your Object Store credentials file:

VCAP = {
  "auth_url": "https://identity.open.softlayer.com",
  "project": "object_storage_07xxxxxx_xxxx_xxxx_xxxx_6d007e3f9118",
  "projectId": "512bfxxxxxxxxxxxxxxxxxxxxxxfe4e1",
  "region": "dallas",
  "userId": "4de3dxxxxxxxxxxxxxxxxxxxxxxx723b",
  "username": "member_caeae76axxxxxxxxxxxxxxxxxxxxxxxxxxxxxx7d",
  "password": "lfZxxxxxxxxxxxx.p",
  "domainId": "151fxxxxxxxxxxxxxxxxxxxxxxde602a",
  "domainName": "773073",
  "role": "member"
}

OS_USER_ID     -> VCAP['userId']
OS_PASSWORD    -> VCAP['password']
OS_PROJECT_ID  -> VCAP['projectId']
OS_REGION_NAME -> VCAP['region']
OS_BUCKET_NAME -> up to you, however you called it
OS_FILE_NAME   -> up to you, however you called it
TF_MODEL       -> 'mobilenet_0.50_{imagesize}' or 'inception_v3'
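A minimal Kubernetes manifest wiring these environment variables into the container could look like the following sketch; the Job name, image name, and all values are placeholders you have to replace with your own:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: tensorflow-training
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: train
        image: <your-dockerhub-user>/tensorflow-training  # placeholder
        env:
        - name: TF_MODEL
          value: "mobilenet_0.50_224"
        - name: OS_USER_ID
          value: "<VCAP userId>"
        - name: OS_PASSWORD
          value: "<VCAP password>"
        - name: OS_PROJECT_ID
          value: "<VCAP projectId>"
        - name: OS_REGION_NAME
          value: "dallas"
        - name: OS_BUCKET_NAME
          value: "tensorflow"
        - name: OS_FILE_NAME
          value: "training-data.zip"
```

For real deployments the credentials belong in a Kubernetes Secret rather than plain values, but that would make the sketch longer.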

Use Object Store to store your trained classifier for later use

We decided to use Object Store to hold our training data and also the re-trained network. This could be any other storage as well, for example S3 on AWS or your local hard drive. Just change the Dockerfile and the execution script to download and upload your data accordingly. More details on how to use the Object Store can be found in my blog article.
