SAP Leonardo Machine Learning and the iOS SDK

In this How-To tutorial, you will test SAP Leonardo's Machine Learning capabilities exposed through SAP API Hub and implement these in a native iOS application built with the SAP Cloud Platform SDK for iOS.

How-To Details

In this How-To tutorial, you will test SAP Leonardo's Machine Learning capabilities exposed through SAP API Hub and implement these in a native iOS application built with the SAP Cloud Platform SDK for iOS. After you have finished the tutorial, you will have an app which lets you take a picture with the camera or choose an image from the Photos Library. This image is sent to SAP Leonardo's "Image Classification" Machine Learning API, which is available for testing on SAP API Hub. The response is a list of classifications for the submitted image, which you could use for further processing.

Step 1: Create Xcode application

Open Xcode and create a new project. Select Single View Application from the dialog and click Next.

In the next screen, enter the following properties:

Field                     Value
-----                     -----
Product Name              Demo
Team                      <Select your team>
Organization Name
Organization Identifier   com.sap.tutorials.demoapps

Click Next to continue.

Specify a location to store your project and click Finish.

Your project is now generated.

Step 2: Add SAP Cloud Platform SDK for iOS framework files

In order to utilize the SAP Cloud Platform SDK for iOS capabilities, you need to add its framework files to your Xcode project.

Using Finder, navigate to the location of the SDK’s framework files at ./<SDK Location>/Frameworks/Release-fat.

Select the following framework files:

SAPCommon.framework

SAPFiori.framework

SAPFoundation.framework

In Xcode, select the Demo project file at the root of the Project navigator and select the General tab. Scroll down to the Embedded Binaries panel and drag the three framework files into it.

Click on the tile APIs. A page with featured and latest APIs are displayed.

Click on the tile SAP Leonardo ML - Functional Services. Switch to the tab Artifacts. Here you see a list of all the functional SAP Leonardo Machine Learning APIs. For this tutorial, you are going to use the Image Classification API.

If you click the Image Classification API link, you navigate to a page displaying the available REST APIs. If you expand the /inference_sync service, you see extensive documentation on how the request should look and the possible responses, and you can even test the service from within the API Hub.

To the right of the service, there’s a Generate Code link. Click on that link, and switch to the Swift tab. Here you see boilerplate code which you can use in your own application.

The generated boilerplate code is generic Swift code, not code optimized for the SAP Cloud Platform SDK for iOS. Although the generated code is fairly simple to use as-is in Swift 2.0 projects, this tutorial uses the SAP Cloud Platform SDK for iOS, which is built on Swift 3.1, so the generated code needs to be changed significantly for your project.

If you test the service, you will receive a response like the following:
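The exact payload depends on the service version, but based on the results node this tutorial reads later, a response could look roughly like this (the labels and scores below are purely illustrative):

```json
{
  "results": [
    { "label": "sunglasses", "score": 0.9377 },
    { "label": "goggles",    "score": 0.0431 },
    { "label": "seat belt",  "score": 0.0192 }
  ]
}
```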

Variable classifications will hold the returned classifications for the submitted image; constant picker holds a reference to the image picker, and constant logger holds a reference to the Logger class in SAPCommon framework.
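The declarations described above could be sketched as follows; the class name follows the tutorial, but the exact types are an assumption:

```swift
import UIKit
import SAPCommon

class ImageClassifierTVC: UITableViewController {

    // Holds the classifications returned for the submitted image
    var classifications = [[String: Any]]()

    // Reference to the image picker used for Camera and Photo Library
    let picker = UIImagePickerController()

    // Reference to the SAPCommon logger for this class
    private let logger = Logger.shared(named: "ImageClassifierTVC")
}
```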

However, for the delegates and the logger to work, they need to be initialized first. Change the ImageClassifierTVC class viewDidLoad() method to the following:
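A sketch of what that method might contain, assuming the class adopts the image picker delegate protocols:

```swift
override func viewDidLoad() {
    super.viewDidLoad()

    // Assign the image picker delegate so the picker's
    // delegate methods on this class are called
    picker.delegate = self

    // Activate debug-level logging via the SAPCommon root logger
    Logger.root.logLevel = .debug
}
```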

Both actions call the view controller’s present method. However, in order for the Camera or Photo Library to be shown, a delegate function needs to be implemented. Add the following delegate to the class:
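A minimal sketch of that delegate, using the Swift 3 UIImagePickerControllerDelegate signature; the helper call inside is a placeholder for the processing added later in the tutorial:

```swift
extension ImageClassifierTVC: UIImagePickerControllerDelegate, UINavigationControllerDelegate {

    // Called when the user has taken or selected a picture
    func imagePickerController(_ picker: UIImagePickerController,
                               didFinishPickingMediaWithInfo info: [String : Any]) {
        if let image = info[UIImagePickerControllerOriginalImage] as? UIImage {
            // Hand the image off for resizing and classification
            // (implemented in later steps of the tutorial)
        }
        dismiss(animated: true, completion: nil)
    }

    // Called when the user cancels the picker
    func imagePickerControllerDidCancel(_ picker: UIImagePickerController) {
        dismiss(animated: true, completion: nil)
    }
}
```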

This resizes any image from the camera or the Photo Library to width of 600 pixels, and scales the height proportionally. Since the API Hub does not allow files exceeding 1 megabyte, in this particular case resizing the image to a smaller size is preferred over increasing the image compression.
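The resizing step could be implemented roughly like this; the function name and signature are assumptions, not the tutorial's exact helper:

```swift
import UIKit

// Scales an image to the given width, keeping the aspect ratio,
// so the upload stays under the API Hub's 1 MB limit.
func resizeImage(_ image: UIImage, toWidth width: CGFloat) -> UIImage {
    let scaleFactor = width / image.size.width
    let newSize = CGSize(width: width, height: image.size.height * scaleFactor)

    UIGraphicsBeginImageContextWithOptions(newSize, false, 1.0)
    image.draw(in: CGRect(origin: .zero, size: newSize))
    let resized = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()

    return resized ?? image
}
```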

If you now build and run the app on a physical iOS device – it does not work on the Simulator since it has no camera – your app should look like this:

If you now click the Camera button, you are asked to give the app permission to use the camera:

Click OK, and take a picture. Click Use Photo, and the app navigates back to the initial screen. Nothing happens further, because you have not yet implemented the SAP Leonardo Image Classification API from the SAP API Hub. You will fix that in the next steps.

Step 13: Implement SAP Leonardo Image Classification API

Take a look at the generated code from the SAP API Hub:

An HTTP request is made with a couple of HTTP headers to the REST API endpoint https://sandbox.api.sap.com/ml/imageclassifier/inference_sync, and the returned response is then printed to the console as plain text. However, the generated code is written in Swift 2.0 and uses plain Foundation networking instead of the SAP Cloud Platform SDK for iOS, and ideally you want the response in JSON format, not plain text.

In this method, first the required HTTP headers are created. The image is sent as multipart/form-data, and the response is retrieved in JSON format. The APIKey header expects your personal API Hub key, which can be retrieved from the API Hub by clicking the key icon in the top-right of the REST API page.
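The headers could be built roughly as below; the boundary construction is a common convention, and "<your API key>" is a placeholder you must replace with your own key from the API Hub:

```swift
import Foundation

// Boundary string used to separate parts in the multipart body
let boundary = "Boundary-\(UUID().uuidString)"

// HTTP headers for the Image Classification request
let headers = [
    "Content-Type": "multipart/form-data; boundary=\(boundary)",
    "Accept": "application/json",
    "APIKey": "<your API key>"
]
```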

Because the request is sent as multipart/form-data, a boundary string needs to be constructed which will be used in the request body. This HTTP body is populated in method createBody.
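A sketch of what createBody might look like; the form field name "files" is an assumption based on the API Hub documentation:

```swift
import Foundation

// Builds a multipart/form-data HTTP body containing the image data
func createBody(boundary: String, data: Data, mimeType: String, filename: String) -> Data {
    var body = Data()

    // Opening boundary and part headers for the image
    body.append("--\(boundary)\r\n".data(using: .utf8)!)
    body.append("Content-Disposition: form-data; name=\"files\"; filename=\"\(filename)\"\r\n".data(using: .utf8)!)
    body.append("Content-Type: \(mimeType)\r\n\r\n".data(using: .utf8)!)

    // The image bytes themselves, followed by the closing boundary
    body.append(data)
    body.append("\r\n".data(using: .utf8)!)
    body.append("--\(boundary)--\r\n".data(using: .utf8)!)

    return body
}
```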

The REST API URL is then set in the request and an URL session is created. After a successful response, the returned data is serialized to a JSON object. The classifications array will then be populated with the JSON object’s results node, and the table view is reloaded.
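That flow could be sketched as follows, with `headers` and `body` built as described above; error handling is kept minimal for readability:

```swift
import Foundation

// Build the request against the sandbox endpoint from the tutorial
var request = URLRequest(url: URL(string: "https://sandbox.api.sap.com/ml/imageclassifier/inference_sync")!)
request.httpMethod = "POST"
request.allHTTPHeaderFields = headers
request.httpBody = body

// Send the request; on success, parse the "results" node and reload
let task = URLSession.shared.dataTask(with: request) { data, response, error in
    guard let data = data, error == nil else {
        self.logger.error("Request failed: \(String(describing: error))")
        return
    }

    if let json = try? JSONSerialization.jsonObject(with: data) as? [String: Any],
       let results = json?["results"] as? [[String: Any]] {
        self.classifications = results
        // UI updates must happen on the main queue
        DispatchQueue.main.async {
            self.tableView.reloadData()
        }
    }
}
task.resume()
```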

The above code isn't written in the most elegant way (there is no JSON object mapping to a Swift class, for instance), but it makes it easier to understand what's going on in this method.

In this method, the FUIObjectTableViewCell with identifier classificationCell you created in step 10 is populated with the label value as its classification, and the score is displayed as a percentage, indicating the confidence that the classification matches the submitted image.
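A sketch of that table view data source method; the "label" and "score" keys follow the response structure used earlier, and the exact cell properties are assumptions based on the SAPFiori API:

```swift
override func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
    // Dequeue the Fiori object cell created in step 10
    let cell = tableView.dequeueReusableCell(withIdentifier: "classificationCell",
                                             for: indexPath) as! FUIObjectTableViewCell

    let classification = classifications[indexPath.row]

    // The classification label becomes the cell's headline
    cell.headlineText = classification["label"] as? String ?? ""

    // The score is shown as a confidence percentage
    if let score = classification["score"] as? Double {
        cell.footnoteText = String(format: "Confidence: %.2f%%", score * 100)
    }

    return cell
}
```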

Because the table cell is now an SAP Fiori Object Cell, the table view’s row height needs to be adjusted. In method viewDidLoad(), add the following lines:
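A sketch of those lines, using the self-sizing cell mechanism available in Swift 3.1-era UIKit; the estimated height is an assumption:

```swift
// Let the Fiori object cell determine its own height
tableView.rowHeight = UITableViewAutomaticDimension
tableView.estimatedRowHeight = 80
```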

If you now build and run the application, everything should work end to end. Take a picture of a single item, for instance these sunglasses:

If you now click the Use Photo button, the image is sent to the SAP API Hub REST endpoint. If all goes well, you should see the JSON response in the console, and the table view is populated with FUIObjectTableViewCell objects displaying the matching classifications, as well as the score as a confidence percentage: