Controlling a Raspberry Pi-Powered Robot Using Kitura

Full Stack Development with Kitura

Swift is now a fully open source project (Apache 2.0 license) and has gone the extra mile: it runs not only on Darwin devices but also on Linux and other platforms such as Android, the Raspberry Pi, and mainframes. The heavy lifting of bringing Swift to Linux and these other platforms is done by the Swift community, which continues to grow. This means that the Swift language, the standard library, the Foundation APIs, and Dispatch for concurrency are available on most platforms. There are now about two dozen server-side frameworks available, including Kitura from IBM, Perfect from PerfectlySoft, and Vapor from Qutheory, all built on Swift.


Kitura from IBM

Kitura is a modular server-side framework written in Swift and developed by IBM. Swift was mostly used to create iOS applications for the Apple App Store; the developer had to choose a different language for the backend, such as Java, Python, or Ruby, which the Swift-based client would connect to for CRUD operations. Kitura makes it possible for iOS developers to use one language for both frontend and backend programming and create a complete application.

Highlights of Kitura Framework

Modular, package-based web framework

Scales out of the box and utilizes Foundation APIs, so you can create an app on macOS and Linux.

Swift is highly performant, secure, and expressive, and so is Kitura.

Easy deployment to cloud platforms like IBM Bluemix, with Watson integration to create cognitive applications.

Xcode support for faster and easier development

Yeoman generators to create and deploy your app in a few minutes

Full Stack Demo

We will be creating a remote control for a robot powered by a Raspberry Pi, using a server-side Kitura application as middleware. The robot will also be integrated with the Watson Text-to-Speech service. The remote will be able to control two things on the robot:

Enter a text for the robot to speak out

Choose a color for the robot to blink

Architecture


The remote is built using Swift and runs on iOS devices. The Robot Remote Control app communicates with the Kitura-based server using the REST APIs that it exposes. The server has CRUD APIs as well as an API that takes the remote input and sends it to the IBM IoT platform: it receives JSON data from the remote app and publishes it as MQTT messages to a topic on the IoT platform, using the Aphid MQTT client. The robot is built around a Raspberry Pi with an LED and a speaker connected to it. The Pi runs a Node.js application that listens for MQTT messages from the IoT platform on the same topic that the Kitura-based server publishes to. Once a message arrives, the received text is converted to speech using the Watson developer cloud SDK and piped to the speaker connected to the Raspberry Pi. The Raspberry Pi also runs Python code that sends the right color signal to the LED so that it blinks.
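As a sketch of the data flowing through this pipeline, the payload the remote sends can be modeled with Codable. The struct name below is illustrative; the text and blinkColor fields match the JSON message shown at the end of this article:

```swift
import Foundation

// Illustrative model of the message the remote sends to the Kitura server
// and that the server forwards to the IoT platform. The field names match
// the JSON payload shown later ({"text": ..., "blinkColor": ...}).
struct RobotMessage: Codable {
    let text: String
    let blinkColor: String
}

let message = RobotMessage(text: "Hello there from Remote Client",
                           blinkColor: "Green")
// Encode to the JSON the server and robot will see.
let data = try! JSONEncoder().encode(message)
print(String(data: data, encoding: .utf8)!)
```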

Generate Swift Server

You can generate the Swift server in no time, as shown in the diagram below. Once you have the scaffolding, you can start to create your own routes and the logic behind them.

SwiftServer Yeoman Generator

Create Model Object

You can again use the generator to create the model object needed for the application, as shown in the diagram.

SwiftServer Yeoman Model Generator

Generated Code

This will generate CRUD APIs for the model object, located in the Generated directory. The RobotContentResource holds the routes for the CRUD APIs. By default, the API will be exposed at the paths:

/api/RobotContents
/api/RobotContents/:id

The project has an Application.swift class that sets up the initial configuration for the server to start. It publicly creates a router object on which routes are registered, a ConfigurationManager that aggregates configuration properties from different sources (command-line arguments, environment variables, files, remote resources, and raw objects), and the port on which the server runs. Once all is set up, main.swift initializes the router and other configuration and runs the server using the initialize() and run() methods respectively.

The application uses HeliumLogger for logging, SwiftMetrics to display metrics about the server's memory and CPU usage, CloudConfiguration to deploy to the IBM Bluemix platform, and Kitura for server-side development. These dependencies are managed by the Swift Package Manager in Package.swift, and any additional dependencies you want to use can be added there.
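A Package.swift along these lines declares the dependencies listed above. The URLs are the actual IBM-Swift and RuntimeTools repositories, but the version pins and target names here are illustrative; the generator produces its own manifest:

```swift
// swift-tools-version:4.0
import PackageDescription

// Illustrative manifest; versions and target names are assumptions,
// the generator pins its own.
let package = Package(
    name: "RobotServer",
    dependencies: [
        .package(url: "https://github.com/IBM-Swift/Kitura.git", from: "2.0.0"),
        .package(url: "https://github.com/IBM-Swift/HeliumLogger.git", from: "1.7.0"),
        .package(url: "https://github.com/RuntimeTools/SwiftMetrics.git", from: "2.0.0"),
        .package(url: "https://github.com/IBM-Swift/CloudConfiguration.git", from: "3.0.0"),
    ],
    targets: [
        .target(name: "Application",
                dependencies: ["Kitura", "HeliumLogger", "SwiftMetrics", "CloudConfiguration"]),
        .target(name: "RobotServer", dependencies: ["Application"]),
    ]
)
```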

Build and Run

You can build and run the generated application from Xcode. The application listens on port 8080 by default. Two dashboards are generated as part of the application:

Swift Metrics Dashboard – You will be able to monitor incoming requests, response times, CPU utilization, and memory footprint.

Swagger API Explorer – You will be able to see all the API definitions, and it also lets you test the APIs from the dashboard.

The Swift Metrics Dashboard can be accessed at http://localhost:8080/swiftmetrics-dash/

Swift Metrics Dashboard

The generated application also integrates a Swagger-based API explorer, which can be accessed in the browser at http://localhost:8080/explorer/

Swagger API Explorer

Creating Routes

Let’s create our own route to receive POST requests from the Robot Remote Control UI. The Router object has get, put, post, and delete methods for creating your own routes; each takes a path as a parameter and exposes request, response, and next variables, as in the code below:

import Foundation
import Kitura
import LoggerAPI
import HeliumLogger
import Application
import SwiftyJSON
import Generated

private let path = "/api/IoT/RobotContent"

do {
    HeliumLogger.use(LoggerMessageType.info)
    try initialize()

    // create a route to send RobotContent to the MQTT platform
    router.post(path) { request, response, next in
        Log.debug("POST \(path)")
        guard let contentType = request.headers["Content-Type"],
              contentType.hasPrefix("application/json") else {
            response.status(.unsupportedMediaType)
            response.send(json: JSON(["error": "Request Content-Type must be application/json"]))
            return next()
        }
        guard case .json(let json)? = request.body else {
            response.status(.badRequest)
            response.send(json: JSON(["error": "Request body could not be parsed as JSON"]))
            return next()
        }
        do {
            let model = try RobotContent(json: json)
            // call the remote service that publishes to the IoT platform
            try RemoteRobotService().sendRobotContent(model: model)
            response.send(json: model.toJSON())
            next()
        } catch let error as ModelError {
            response.status(.unprocessableEntity)
            response.send(json: JSON(["error": error.localizedDescription]))
            next()
        } catch {
            Log.error("InternalServerError during handleCreate: \(error)")
            response.status(.internalServerError)
            next()
        }
    }

    try run()
} catch let error {
    Log.error(error.localizedDescription)
}

After receiving the request from the remote UI, you could save the data to a database; in our case, it is published to the IoT platform as MQTT messages. The module used to publish MQTT messages to the IBM IoT platform is the Aphid client developed by IBM. The RemoteRobotService class used in the route above sends the JSON as MQTT messages to the IoT platform.
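The RemoteRobotService class itself is in the repository, but the IBM Watson IoT platform MQTT conventions it relies on can be sketched as plain strings: an application connects with a client ID of the form a:org:appId and publishes device events to a well-known topic. All concrete values below are placeholders, not the project's real credentials:

```swift
import Foundation

// Placeholder values; the real ones come from the IoT platform credentials.
let org = "myorg"
let appId = "kitura-robot-server"
let deviceType = "RaspberryPi"
let deviceId = "robot-1"
let event = "robotContent"

// Watson IoT application client ID: "a:<org>:<appId>"
let clientId = "a:\(org):\(appId)"

// Topic an application publishes device events to:
// "iot-2/type/<type>/id/<id>/evt/<event>/fmt/<format>"
let topic = "iot-2/type/\(deviceType)/id/\(deviceId)/evt/\(event)/fmt/json"

print(clientId)
print(topic)
```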

Once your code is ready, you can start the server from Xcode by editing the scheme, selecting the project as the executable, and running it. You will see that the project runs on port 8080, and you can access the URLs mentioned above.

Running the Robot Remote Control

Clone the remote UI code from the GitHub repository and run it. Once you run the code, you will see the following UI. The remote UI sends the text and blinkColor to the Swift server via the RESTful API http://localhost:8080/api/IoT/RobotContent

Swift Remote App
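For reference, the request the remote issues can be sketched with Foundation's URLRequest. This is a minimal client-side sketch (the payload fields match the JSON shown at the end of the article); actually sending it with URLSession is omitted:

```swift
import Foundation
#if canImport(FoundationNetworking)
import FoundationNetworking  // URLRequest lives here on Linux
#endif

// Build the POST request the remote sends to the Kitura route.
let url = URL(string: "http://localhost:8080/api/IoT/RobotContent")!
var request = URLRequest(url: url)
request.httpMethod = "POST"
request.setValue("application/json", forHTTPHeaderField: "Content-Type")
request.httpBody = try! JSONSerialization.data(
    withJSONObject: ["text": "Hello there from Remote Client",
                     "blinkColor": "Green"])
```

Sending this request hits the /api/IoT/RobotContent route defined earlier, which forwards the payload to the IoT platform.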

Setup IBM BlueMix

Set up the IoT platform in IBM Bluemix using the console and get the credentials to use in your app

Set up Watson Text-to-Speech and save the credentials to use in your app

Creating Raspberry-PI powered Robot

We will be creating a robot that can talk and blink the LED attached to it. The robot is powered by a Raspberry Pi. For this we need:

The next step is to write Node.js code that subscribes to the IoT platform topic that the Swift server publishes to. The following Node.js code connects to the IoT platform using an npm MQTT client and uses the Watson developer cloud SDK to call the Watson Text-to-Speech service, which converts text to sound.

// module requires; Credentials is the file shown below
var TJBot = require('tjbot');
var TextToSpeechV1 = require('watson-developer-cloud/text-to-speech/v1');
var Credentials = require('./Credentials');

// hardware, tjConfig, channel, and mqtt_client are set up earlier in index.js

// instantiate our TJBot!
var tj = new TJBot(hardware, tjConfig, Credentials);

// turn the LED off
tj.shine('off');

mqtt_client.on('connect', function() {
    mqtt_client.subscribe(channel, function(err, granted) {
        if (err) {
            console.log(err);
        }
        console.log(granted);
    });
});

mqtt_client.on('message', function(topic, message) {
    var contentForRobot = JSON.parse(message);
    var text = contentForRobot.text;
    var ledColor = contentForRobot.blinkColor;
    console.log("Text: " + text);
    console.log("blinkColor: " + ledColor);

    // call the Watson API and play the sound
    if (text) {
        playTextToTheSpeaker(text);
    }

    // send a signal to the LED to shine with the color
    if (ledColor) {
        tj.shine(ledColor.toLowerCase());
    }
});

const textToSpeech = new TextToSpeechV1({
    // if left unspecified here, the SDK will fall back to the
    // TEXT_TO_SPEECH_USERNAME and TEXT_TO_SPEECH_PASSWORD environment variables
    username: Credentials.TEXT_TO_SPEECH_USERNAME,
    password: Credentials.TEXT_TO_SPEECH_PASSWORD
});

The following dependencies are required to properly use the Watson developer cloud SDK and pipe the output of Watson Text-to-Speech to the speaker.

Robot PI Dependencies

You also need the following credentials from the Bluemix platform. Replace the placeholders in the Credentials.js file with them.

module.exports = {
    IOT_API_KEY: '<IOT_API_KEY>',
    IOT_AUTH_TOKEN: '<IOT_AUTH_TOKEN>',
    IOT_ORG: '<IOT_ORG>',
    IOT_DEVICE_TYPE: '<IOT_DEVICE_TYPE>',
    IOT_DEVICE_ID: '<IOT_DEVICE_ID>',
    TEXT_TO_SPEECH_USERNAME: '<TEXT_TO_SPEECH_USERNAME>',
    TEXT_TO_SPEECH_PASSWORD: '<TEXT_TO_SPEECH_PASSWORD>'
};

Install the dependencies and start the Node.js application using


npm install

node index.js

The application starts listening to the topic on the IoT platform. When the topic receives an MQTT message, the Node.js application receives the JSON, calls Watson developer cloud Text-to-Speech, and routes the response to the speaker, so you can hear what you typed in the UI from the robot. The Node.js app also uses the tjbot library to shine the LED with the selected color, which arrives in the same message. The following is an example of the message that the Node.js application receives on the topic:


{
    "text": "Hello there from Remote Client",
    "blinkColor": "Green"
}

GitHub

Please find the GitHub repository for all three components here. You just need to add the credentials and run each component to see the output.


Sanjeev Ghimire is passionate about problem-solving through technology. With more than a decade of software engineering experience ranging from fin-tech to healthcare, Sanjeev was also the CTO and co-founder of onTarget. These days, Sanjeev is a Developer Advocate with IBM focused on emerging technologies such as blockchain, IoT, and AR/VR. When not cranking away in Java, Swift, Python, or Scala, you can find Sanjeev at the gym, playing the drums, or catching an Arsenal F.C. (a.k.a. Gunners) soccer match.