TensorFlow Serving for Bitnami Cloud Hosting

Description

TensorFlow Serving is a system for serving machine learning models. This stack includes the Inception v3 model with pre-trained data for image recognition, but it can be extended to serve other models.

What is TensorFlow?

According to the TensorFlow web site, "TensorFlow is an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them. The flexible architecture allows you to deploy computation to one or more CPUs or GPUs in a desktop, server, or mobile device with a single API. TensorFlow was originally developed by researchers and engineers working on the Google Brain Team within Google's Machine Intelligence research organization for the purposes of conducting machine learning and deep neural networks research, but the system is general enough to be applicable in a wide variety of other domains as well".

What is TensorFlow Serving?

TensorFlow Serving is "a flexible, high-performance serving system for machine learning models, designed for production environments. TensorFlow Serving makes it easy to deploy new algorithms and experiments, while keeping the same server architecture and APIs. TensorFlow Serving provides out-of-the-box integration with TensorFlow models, but can be easily extended to serve other types of models and data". (credit: TensorFlow Serving web site).

What is the Inception model?

The Inception model is one of the available TensorFlow models for image recognition. For more information, check the Image Recognition tutorial.

First steps with the Bitnami TensorFlow Serving Stack

Welcome to your new Bitnami application! Here are a few questions (and answers!) you might need when first starting with your application.

What SSH username should I use for secure shell access to my application?

SSH username: bitnami

What are the default ports?

A port is an endpoint of communication in an operating system that identifies a specific process or a type of service. Bitnami stacks include several services or servers that require a port.

Remember that, if you need to open some ports, you can follow the instructions given in the FAQ to learn how to open the server ports for remote access.

Port 22 is the default port for SSH connections.
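As a quick way to verify whether a given service port is reachable, the snippet below is a minimal sketch using Python's standard socket module; the helper name and the throwaway listener are illustrative, not part of the stack:

```python
import socket

def is_port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demonstrate against a throwaway listener on an ephemeral local port.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
listener.listen(1)
open_port = listener.getsockname()[1]

print(is_port_open("127.0.0.1", open_port))  # True: the listener accepts
listener.close()
print(is_port_open("127.0.0.1", open_port))  # False: nothing listens anymore
```

The same check can be pointed at your server's public IP and port 22 (or any other service port) to confirm whether the firewall allows access.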

How to start or stop the services?

Each Bitnami stack includes a control script that lets you easily stop, start and restart services. The script is located at /opt/bitnami/ctlscript.sh. Call it with the start action and no service name to start all services:

$ sudo /opt/bitnami/ctlscript.sh start

Or use it to restart a single service, such as Apache only, by passing the service name as an argument:

$ sudo /opt/bitnami/ctlscript.sh restart apache

Use this script to stop all services:

$ sudo /opt/bitnami/ctlscript.sh stop

Restart all services by passing the restart action without a service name:

$ sudo /opt/bitnami/ctlscript.sh restart

Obtain a list of available services and operations by running the script without any arguments:

$ sudo /opt/bitnami/ctlscript.sh

How to upload files to the server with SFTP?

Click the "Connect" button and download the SSH key for your server in .ppk format (for FileZilla or WinSCP) or in .pem format (for Cyberduck).

Although you can use any SFTP/SCP client to transfer files to your server, this guide documents FileZilla (Windows, Linux and Mac OS X), WinSCP (Windows) and Cyberduck (Mac OS X).

Using an SSH Key

Once you have your server's SSH key, choose your preferred application and follow the steps below to connect to the server using SFTP.

FileZilla

IMPORTANT: To use FileZilla, your server private key should be in PPK format.

Follow these steps:

Download and install FileZilla.

Launch FileZilla and use the "Edit -> Settings" command to bring up FileZilla's configuration settings.

Within the "Connection -> SFTP" section, use the "Add keyfile" command to select the private key file for the server. FileZilla will use this private key to log in to the server.

Use the "File -> Site Manager -> New Site" command to bring up the FileZilla Site Manager, where you can set up a connection to your server.

Enter your server host name and specify bitnami as the user name.

Select "SFTP" as the protocol and "Ask for password" as the logon type.

Use the "Connect" button to connect to the server and begin an SFTP session. You might need to accept the server key, by clicking "Yes" or "OK" to proceed.

You should now be logged into the /home/bitnami directory on the server. You can now transfer files by dragging and dropping them from the local server window to the remote server window.

If you have problems accessing your server, get extra information by using the "Edit -> Settings -> Debug" menu to activate FileZilla's debug log.

WinSCP

IMPORTANT: To use WinSCP, your server private key should be in PPK format.

Follow these steps:

Download and install WinSCP.

Launch WinSCP and in the "Session" panel, select "SFTP" as the file protocol.

Enter your server host name and specify bitnami as the user name.

Click the "Advanced…" button and within the "SSH -> Authentication -> Authentication parameters" section, select the private key file for the server. WinSCP will use this private key to log in to the server.

From the "Session" panel, use the "Login" button to connect to the server and begin an SCP session.

You should now be logged into the /home/bitnami directory on the server. You can now transfer files by dragging and dropping them from the local server window to the remote server window.

If you need to upload files to a location where the bitnami user doesn't have write permissions, you have two options:

Once you have configured WinSCP as described above, click the "Advanced…" button and within the "Environment -> Shell" panel, select sudo su - as your shell. This will allow you to upload files using the administrator account.

Upload the files to the /home/bitnami directory as usual. Then, connect via SSH and move the files to the desired location with the sudo command, as shown below:

$ sudo mv /home/bitnami/uploaded-file /path/to/desired/location/

Cyberduck

IMPORTANT: To use Cyberduck, your server private key should be in PEM format.

Follow these steps:

Select the "Open Connection" command and specify "SFTP" as the connection protocol.

In the connection details panel, under the "More Options" section, enable the "Use Public Key Authentication" option and specify the path to the private key file for the server.

Use the "Connect" button to connect to the server and begin an SFTP session.

You should now be logged into the /home/bitnami directory on the server. You can now transfer files by dragging and dropping them from the local server window to the remote server window.

In addition to TensorFlow Serving itself, the stack bundles the following utilities:

imagenet_train and imagenet_eval utilities: These are tools to train your own models. Please read the Inception model training guide if you want to know more about this.

TensorBoard: This is a web interface for monitoring TensorFlow jobs. Find more information at the TensorBoard repository.

How to connect to TensorFlow Serving from a different machine?

For security reasons, the TensorFlow Serving port in this solution cannot be accessed over a public IP address. To connect to TensorFlow Serving from a different machine, you must either create an SSH tunnel or open port 9000 for remote access. Refer to the FAQ for more information on creating an SSH tunnel or opening server ports.

IMPORTANT: Making this application's network ports public is a significant security risk. You are strongly advised to only allow access to those ports from trusted networks. If, for development purposes, you need to access from outside of a trusted network, please do not allow access to those ports via a public IP address. Instead, use a secure channel such as a VPN or an SSH tunnel. Follow these instructions to remotely connect safely and reliably.

Once you have an active SSH tunnel or have opened the port for remote access, you can connect to TensorFlow Serving using the Inception client with a command like the one below. Replace SOURCE-PORT with the source port number specified in the SSH tunnel configuration (or 9000 if you opened the port for remote access), and HOST with 127.0.0.1 if you are using an SSH tunnel, or with the host's actual IP address otherwise.

$ inception_client --server=HOST:SOURCE-PORT --image=/tmp/example.jpg
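The replacement rule above can be expressed as a small helper. The sketch below is a hedged Python illustration; the function name and the example IP address (a documentation-only address) are assumptions, not part of the stack:

```python
def serving_target(use_ssh_tunnel, server_ip, source_port=9000):
    """Build the HOST:SOURCE-PORT target for the Inception client.

    With an SSH tunnel, the client talks to the local tunnel endpoint
    (127.0.0.1); otherwise it talks to the server's actual IP address.
    """
    host = "127.0.0.1" if use_ssh_tunnel else server_ip
    return "{}:{}".format(host, source_port)

print(serving_target(True, "203.0.113.10"))   # 127.0.0.1:9000
print(serving_target(False, "203.0.113.10"))  # 203.0.113.10:9000
```

The resulting string is exactly what would be passed to the client's --server option.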

How to change the TensorFlow Serving model configuration file?

TensorFlow Serving is ready to be used with the Inception v3 model. You may want to use a different version of the model, or even a different model entirely.

You can change the configuration settings of the model by editing the /opt/bitnami/tensorflow-serving/conf/tensorflow-serving.conf file:
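As an illustration, an entry in this file might look like the fragment below. This is a hedged sketch in TensorFlow Serving's model-config text format; the model name and base path are assumptions rather than the stack's actual values:

```
model_config_list: {
  config: {
    name: "inception",
    base_path: "/opt/bitnami/tensorflow-serving/data/inception-v3",
    model_platform: "tensorflow"
  }
}
```

After editing the file, restart the TensorFlow Serving service for the change to take effect.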

How to compile example clients other than Inception?

NOTE: The Bitnami TensorFlow Serving Stack is configured to deploy the TensorFlow Inception Serving API. This image also ships other tools, such as Bazel and the TensorFlow Python library, for training models. Training operations have higher hardware requirements in terms of CPU, RAM and disk. It is highly recommended to check the requirements for these operations and scale your server accordingly.

As an example, this section describes how to compile and test the mnist utilities:

How to launch TensorBoard?

NOTE: The Bitnami TensorFlow Serving Stack is configured to deploy the TensorFlow Inception Serving API. This image also ships other tools, such as Bazel and the TensorFlow Python library, for training models. Training operations have higher hardware requirements in terms of CPU, RAM and disk. It is highly recommended to check the requirements for these operations and scale your server accordingly.

Execute the TensorBoard server:

$ tensorboard --logdir=path/to/log-directory

By default, the TensorBoard service uses port 6006. You will need to create an SSH tunnel to access it. Refer to the FAQ if you need help with this.

Where can I find utilities to train a model?

NOTE: The Bitnami TensorFlow Serving Stack is configured to deploy the TensorFlow Inception Serving API. This image also ships other tools, such as Bazel and the TensorFlow Python library, for training models. Training operations have higher hardware requirements in terms of CPU, RAM and disk. It is highly recommended to check the requirements for these operations and scale your server accordingly.

The imagenet_train and imagenet_eval utilities are already compiled in our stack. You can find them at /opt/bitnami/tensorflow-serving/bin.

How to enable NVIDIA GPU support?

NOTE: The steps below require you to download various libraries and recompile TensorFlow Serving with GPU support. Before proceeding, ensure that the host system has the necessary disk space, CPU and RAM to handle heavy compilation workloads.

To enable NVIDIA GPU support in TensorFlow Serving, follow these steps: