Tutorial 3: Classify Iris: Deploy a model

Azure Machine Learning (preview) is an integrated, end-to-end data science and advanced analytics solution for professional data scientists. Data scientists can use it to prepare data, develop experiments, and deploy models at cloud scale.

This tutorial is part three of a three-part series. In this part of the tutorial, you use Machine Learning (preview) to:

Locate and download the model pickle file
Get the scoring script and schema files
Prepare your environment to operationalize the model
Create and run a real-time web service

Download the model pickle file

In the previous part of the tutorial, the iris_sklearn.py script was run in the Machine Learning Workbench locally. This action serialized the logistic regression model by using the popular Python object-serialization package pickle.

Open the Machine Learning Workbench application. Then open the myIris project you created in the previous parts of the tutorial series.

After the project is open, select the Files button (folder icon) on the left pane to open the file list in your project folder.

Select the iris_sklearn.py file. The Python code opens in a new text editor tab inside the workbench.

Review the iris_sklearn.py file to see where the pickle file was generated. Select Ctrl+F to open the Find dialog box, and then find the word pickle in the Python code.

This code snippet shows how the pickle output file was generated. The output pickle file is named model.pkl on the disk.
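The relevant lines look similar to the following sketch; the classifier variable name (clf1) is illustrative and the snippet assumes pickle is already imported at the top of the script:

# serialize the trained logistic regression model to the outputs folder
print("Export the model to model.pkl")
f = open('./outputs/model.pkl', 'wb')
pickle.dump(clf1, f)
f.close()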

When you ran the iris_sklearn.py script, the model file was written to the outputs folder with the name model.pkl. This folder lives in the execution environment that you chose to run the script in, not in your local project folder.

a. To locate the file, select the Runs button (clock icon) on the left pane to open the list of All Runs.

b. The All Runs tab opens. In the table of runs, select one of the recent runs where the target was local and the script name was iris_sklearn.py.

c. The Run Properties pane opens. In the upper-right section of the pane, notice the Outputs section.

d. To download the pickle file, select the check box next to the model.pkl file, and then select Download. Save the file to the root of your project folder. The file is needed in the upcoming steps.

Get the scoring script and schema files

To deploy the web service along with the model file, you also need a scoring script. Optionally, you can provide a schema for the web service input data. The scoring script loads the model.pkl file from the current folder and uses it to produce new predictions.
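Conceptually, a scoring script like score_iris.py defines an init() function that loads the pickled model when the service starts, and a run(input_df) function that scores incoming data. The following is a minimal sketch of that pattern, not the full score_iris.py (which also generates the schema file and wires up data collection); the variable names are illustrative:

import pickle

model = None

def init():
    # load the serialized scikit-learn model from the current folder when the service starts
    global model
    with open('model.pkl', 'rb') as f:
        model = pickle.load(f)

def run(input_df):
    # input_df is a pandas DataFrame; score it and return the predictions as a list
    pred = model.predict(input_df)
    return pred.tolist()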

Open the Machine Learning Workbench application. Then open the myIris project you created in the previous part of the tutorial series.

After the project is open, select the Files button (folder icon) on the left pane to open the file list in your project folder.

Select the score_iris.py file. The Python script opens. This file is used as the scoring file.

To get the schema file, run the script. Select the local environment and the score_iris.py script in the command bar, and then select Run.

This script creates a JSON file in the Outputs section, which captures the input data schema required by the model.

Note the Jobs pane on the right side of the Project Dashboard pane. Wait for the latest score_iris.py job to display the green Completed status. Then select the score_iris.py hyperlink for the latest run to see the run details.

On the Run Properties pane, in the Outputs section, select the newly created service_schema.json file. Select the check box next to the file name, and then select Download. Save the file into your project root folder.

Return to the previous tab where you opened the score_iris.py script. By using data collection, you can capture model inputs and predictions from the web service. The following parts of the script are of particular interest for data collection.

Review the code at the top of the file, which imports the ModelDataCollector class that provides the model data collection functionality:

from azureml.datacollector import ModelDataCollector

Review the following lines of code in the init() function, which instantiate ModelDataCollector:
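The instantiation typically looks like the following; the correlation name ("model.pkl") and the identifiers ("inputs" and "prediction") shown here are illustrative and may differ in your copy of the script:

global inputs_dc, prediction_dc
inputs_dc = ModelDataCollector("model.pkl", identifier="inputs")
prediction_dc = ModelDataCollector("model.pkl", identifier="prediction")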

Review the following lines of code in the run(input_df) function, which collect the input and prediction data:

inputs_dc.collect(input_df)
prediction_dc.collect(pred)

Now you're ready to prepare your environment to operationalize the model.

Prepare to operationalize locally

Use local mode deployment to run in Docker containers on your local computer.

You can use local mode for development and testing. The Docker engine must be running locally to complete the following steps to operationalize the model. You can use the -h flag at the end of each command to show the corresponding help message.

Note

If you don't have the Docker engine locally, you can still proceed by creating a cluster in Azure for deployment. You can keep the cluster for re-use, or delete it after the tutorial so you don't incur ongoing charges.

Note

Web services deployed locally don't show up in the Azure portal's list of services. They run in Docker on the local machine.

Open the command-line interface (CLI).
In the Machine Learning Workbench application, on the File menu, select Open Command Prompt.

Create the environment. You must run this step once per environment. For example, run it once for the development environment, and once for production. Use local mode for this first environment. You can try the -c or --cluster switch in the following command to set up an environment in cluster mode later.

The following setup command requires you to have Contributor access to the subscription. If you don't have that, you need at least Contributor access to the resource group that you are deploying to. In the latter case, you need to specify the resource group name as part of the setup command by using the -g flag.
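A representative form of the setup command, assuming the preview azure-cli-ml syntax; the environment name and Azure region are placeholders, and you append -g <resource group name> if you only have Contributor access at the resource group level:

az ml env setup -n <deployment environment name> --location <Azure region, such as eastus2>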

Follow the on-screen instructions to provision a storage account for storing Docker images, an Azure container registry that lists the Docker images, and an Azure Application Insights account that gathers telemetry. If you use the -c switch, the command will additionally create a Container Service cluster.

The cluster name is a way for you to identify the environment. The location should be the same as the location of the Model Management account you created from the Azure portal.

To make sure that the environment is set up successfully, use the following command to check the status:
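For example, assuming the same preview CLI syntax, with the resource group and environment names from your setup:

az ml env show -g <resource group name> -n <deployment environment name>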

After the setup finishes, use the following command to set the environment variables required to operationalize the environment. Use the same environment name that you used previously in step 2. Use the same resource group name that was output in the command window when the setup process finished.
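A representative form, again assuming the preview CLI syntax:

az ml env set -g <resource group name> -n <deployment environment name>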

To verify that you have properly configured your operationalized environment for local web service deployment, enter the following command:

az ml env show

Now you're ready to create the real-time web service.

Note

You can reuse your Model Management account and environment for subsequent web service deployments. You don't need to create them for each web service. An account or an environment can have multiple web services associated with it.

The following switches are used with the az ml service create realtime command:

-f: The scoring script file name.

--model-file: The model file. In this case, it's the pickled model.pkl file.

-s: The service schema. This was generated in a previous step by running the score_iris.py script locally.

-n: The app name, which must be all lowercase.

-r: The runtime of the model. In this case, it's a Python model. Valid runtimes are python and spark-py.

--collect-model-data true: This switch enables data collection.

-c: Path to the conda dependencies file where additional packages are specified.

Important

The service name, which is also the new Docker image name, must be all lowercase. Otherwise, you get an error.
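Put together, a representative invocation looks like the following; the service name irisapp is used throughout this tutorial, while the conda dependencies path reflects the default Workbench project layout and may differ in your project:

az ml service create realtime -f score_iris.py --model-file model.pkl -s service_schema.json -n irisapp -r python --collect-model-data true -c aml_config\conda_dependencies.yml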

When you run the command, the model and the scoring files are uploaded to the storage account you created as part of the environment setup. The deployment process builds a Docker image with your model, schema, and scoring file in it, and then pushes it to the Azure container registry: <ACR_name>.azurecr.io/<imagename>:<version>.

The command pulls down the image locally to your computer and then starts a Docker container based on that image. If your environment is configured in cluster mode, the Docker container is instead deployed into the Azure Container Service Kubernetes cluster.

As part of the deployment, an HTTP REST endpoint for the web service is created on your local machine. After a few minutes, the command should finish with a success message. Your web service is ready for action!

To see the running Docker container, use the docker ps command:

docker ps

Create a real-time web service by using separate commands

As an alternative to the az ml service create realtime command shown previously, you can also perform the steps separately.

First, register the model. Then generate the manifest, build the Docker image, and create the web service. This step-by-step approach gives you more flexibility at each step. Additionally, you can reuse the entities generated in previous steps and rebuild the entities only when needed.

Register the model by providing the pickle file name.

az ml model register --model model.pkl --name model.pkl

This command generates a model ID.

Create a manifest.

To create a manifest, use the following command and provide the model ID output from the previous step:
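A representative form, assuming the preview azure-cli-ml syntax; the manifest name is a placeholder, and <model ID> is the ID that az ml model register returned:

az ml manifest create --manifest-name <manifest name> -f score_iris.py -r python -i <model ID> -s service_schema.json -c aml_config\conda_dependencies.yml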

Run the real-time web service

To test the irisapp web service that's running, use a JSON-encoded record containing an array of four random numbers.

The web service includes sample data. When running in local mode, you can call the az ml service usage realtime command. That call retrieves a sample run command that you can use to test the service. The call also retrieves the scoring URL that you can use to incorporate the service into your own custom app.
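For example, assuming the preview CLI syntax and that irisapp is the service ID reported when the service was created:

az ml service usage realtime -i irisapp

The sample run command it returns looks similar to the following; the feature names and values shown here are illustrative and must match the schema generated from your training data:

az ml service run realtime -i irisapp -d "{\"input_df\": [{\"sepal length\": 3.0, \"sepal width\": 3.6, \"petal width\": 0.25, \"petal length\": 1.3}]}"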