Getting Connected with Google Home Using API.AI & Talend


"OK Google, what can you do when connected to Talend?" In this tutorial, I will show how to create an Agent in API.AI that will respond to commands spoken to Google Home. The Agent will reverse the words in a sentence spoken to Google Home by making use of a Talend web service which is used to carry out the word reversal. A very simple example, but it demonstrates the ground work you will need to create some really quite interesting applications. You do not need one to try this tutorial out as Google has provided an emulator, but I can highly recommend the device.

Recently Google opened up access to the Actions on Google API. You can either use the Actions SDK or use API.AI, which was recently acquired by Google. While API.AI is really quite simple to use, it is quite limited in how it can be used with Google Home at the moment. However, it does allow you to call out to your own web services. This simple piece of functionality massively opens doors when used with a tool like Talend. In fact, I think the possibilities are almost endless when you factor Talend into the mix.

To start with, we will look at creating a few basic agents using API.AI.

Creating an Agent

This is essentially a Talend tutorial, so I won't be spending too much time going into great detail about the full workings of API.AI. There are plenty of tutorials out there that do that. I am hoping I can give enough information in this tutorial to enable you to extrapolate from this and other tutorials to build your own Agents.

Step-by-Step

1) The first thing you need to do is to sign up to API.AI if you haven't already. Since I use Google Chrome as my browser and I am always logged in to Google, this process is made very simple. I just clicked "Sign Up Free" and was presented with a standard authentication screen. As seen below...

Click "Allow" (if you are happy with what you are agreeing to) and you will be ready to create your first Agent.

2) The first screen you see will be similar to the one below. We are going to get going straight away and click the "Create Agent" button.

3) The Create Agent page looks like this:

For this example, I have called the Agent "TalendFirstAgent", set the Agent Type to "Private", set the Description to "Talend First Agent" and the Default Time Zone to "(GMT0:00) Europe/London". You can set these values as you wish, but it probably makes sense not to change too much for now, unless you are familiar with API.AI.

Once you are happy, click "Save".

My settings can be seen below....

4) Once we have created the Agent, we need to create Intents. An Intent maps between what a user says and the response and/or action taken. A more detailed explanation of Intents can be found here. This is a very basic Agent, so we are not going to be doing anything terribly complicated here. The Intents we are going to be using can be seen below....

The Intents seen above are:

Default Fallback Intent - A default Intent that is already present. This is not changed.
Goodbye - An Intent to handle exiting the Agent
Help - An Intent to handle describing how to use the Agent
Reverse what I say sentence - An Intent to handle the primary action of the Agent, reversing sentences
Talend First Agent Welcome - An Intent to handle the welcome message of the Agent

You will see that for your basic Agent there is also an Intent called Default Welcome Intent. For this example, I have updated it to suit the Agent. Edit the Default Welcome Intent and compare what you see with the screenshot below. Edit yours as appropriate (remembering to change the name to Talend First Agent Welcome) and click the Save button.

Change the name by editing the value in the red box. Then, change the text response options by editing the values in the blue box. You can add more if you wish. Finally, save the Intent by clicking the Save button in the green box.

5) Next we will create the "Help Intent". To do this, go back to the Intents screen and click on Create Intent. You will be presented with a blank Intent screen as seen below:

The sections that need to be filled in are surrounded by colored boxes (red, blue and orange). In this example, the following is what is used:

Intent Name - Help
User Says - Help
Text Response - To get me to reverse the words in a sentence say "Reverse what I say" followed by any sentence. To exit say "Goodbye" or "Bye". To repeat this help description say "Help".

Once the values have been set, click the Save button circled in green.

The fully configured Intent can be seen below....

6) The next Intent we will create will be the Goodbye Intent. This Intent is used to exit the Agent. To create this, we do exactly the same as with the Help Intent, but obviously we don't use the same values. All of the same fields need to be filled in, but with one extra to end the Agent. Below is a screenshot of the Intent configuration...

The sections that need to be filled in are surrounded by colored boxes (red, blue, orange and purple). In this example, the following is what is used:

Intent Name - Goodbye
User Says - Goodbye and Bye (two values mean that the Agent will respond to either)
Text Response - See you later! and Goodbye! (two values enable the Agent to reply with either, chosen at random)
End conversation - Ticked (to get to this, simply click on the "Actions on Google" heading)

The End conversation tick box ensures that the Agent will shut down after this is called. Once the values have been set, click the Save button circled in green.

7) The last Intent to create is the Reverse what I say sentence Intent. But before this is done, we need to ensure we have configured our Agent to use a Webhook for the Fulfillment option. This is where we can make use of a web service and where we can hook Talend into the procedure. To configure this, we need to click on the Fulfillment section of the menu on the left of the Agent screen. Once clicked, you are presented with the screen below. Note that mine has already been configured, so you will not see all of the options shown until you set the Enabled slider to On.

The red box is where the Enabled switch/slider is located.

The blue box is where you enter your web service endpoint. It is important to note that Google Home (or Actions on Google) will only allow you to use an https endpoint. At present we have not configured API.AI to use Actions on Google, so at this point you can use an insecure http endpoint. This is great for testing, but will only allow you to test within the API.AI environment and not by using Google Home. The orange box allows the webhook to be used for all domains. I don't do a great deal with Domains in this tutorial, but if you want to find out more, look here.

Once this is configured, click on the Save button surrounded by a green box above.

Remember that this webhook will not function until we have a web service running. The Agent will still appear to work in the API.AI test environment, but you will not get the response generated by the web service.

To have this web service called, we need to configure the Reverse what I say sentence Intent. This is shown below.

8) The Reverse what I say sentence Intent is where the action happens. This is the most complicated of the Intents we are creating, but it really isn't that complicated at all. The fully configured Intent can be seen below....

The sections that need to be filled in are surrounded by colored boxes (purple, red, blue and orange). There are other sections populated, but I will describe those sections after this. In this example, the following is what is used:

Intent Name - Reverse what I say sentence
User Says - Reverse what I say sentence (you will notice that "sentence" is highlighted yellow; this is described below)
Required - This tick box indicates that the parameter is required. The parameter is created automatically after a slight update to the "User Says" section, described below.
Use webhook - Ticked (this causes the Intent to trigger the web service we will create using Talend)

9) You will notice in the previous screenshot that there is a configured parameter which I didn't cover. I felt the "sentence" parameter needed a little extra explanation to enable you to configure it correctly. When I was playing around with this for the first time, I found that other tutorials glossed over it, which led me to make a few mistakes. In reality, creating this parameter is very easy and practically automatic if done in the correct way. The screenshot below shows what you will see when you are attempting this correctly. Essentially, you simply highlight the "sentence" section of the "User Says" value you have input.

To get this drop down you must highlight the text using your mouse.

From the drop down, I selected the @sys.all option, as shown below....

Once this is selected, the other parameter sections are automatically populated. As shown below....

The Required tick box in the blue box is not automatically ticked. I did this after the section had been automatically created.

The Talend Google Home Web Service

We are now at the point where we need a web service in order to test out our Agent created above. The web service we are creating here is the one that corresponds to the endpoint specified for the Webhook above. This web service must be a REST service that accepts a POST request. It will receive a JSON message in the POST body. The format of that message is described here. The format of the message that we must send back is described here. So long as we adhere to the input and output requirements, our web service can do pretty much whatever we want. For this example, it is pretty simple: it receives the request, reverses the order of the words in the supplied sentence, and returns the reordered sentence in the response message.
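To give a feel for the messages involved, the key fields look roughly like this. These are heavily simplified sketches based on the API.AI v1 webhook format; the linked documentation describes the full schemas. A request to our service will contain the matched Intent and its parameters:

```json
{
  "result": {
    "resolvedQuery": "Reverse what I say hello world",
    "parameters": {
      "sentence": "hello world"
    },
    "metadata": {
      "intentName": "Reverse what I say sentence"
    }
  }
}
```

...and the response we send back needs, at minimum, the spoken and displayed text:

```json
{
  "speech": "world hello",
  "displayText": "world hello",
  "source": "TalendFirstAgent"
}
```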

I mentioned earlier that Actions on Google requires a secure web service, however API.AI does not. As such we can actually test our web service within the Talend Studio environment so long as we can expose the endpoint to the outside world. This usually involves a bit of port forwarding from your router to the IP address of your development machine for the port your service is running on. There are lots of tutorials on this and it is probably best to look for one related to the router you have.

OK, so now we are ready to build the Talend Google Home Web Service.

Below you can see a screenshot of the web service.

This service makes use of 4 Talend components and a code routine.

The code routine can be seen below....

The TalendFirstAgentUtils Routine

This routine simply takes a sentence String and a separator String, splits the sentence into words, and reorders those words so that they are reversed. The Java to do this can be seen below.
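The class name (TalendFirstAgentUtils) and method name (reverseSentence) are the ones the job uses. As a rough guide, the routine looks something like this; treat the body as a sketch, as the exact code in the screenshot may differ slightly:

```java
public class TalendFirstAgentUtils {

    /**
     * Splits a sentence on the supplied separator and returns the
     * words in reverse order, joined by the same separator.
     */
    public static String reverseSentence(String sentence, String separator) {
        // Guard against null or empty input
        if (sentence == null || sentence.trim().isEmpty()) {
            return sentence;
        }
        // String.split treats the separator as a regex; a plain space is fine here
        String[] words = sentence.trim().split(separator);
        StringBuilder reversed = new StringBuilder();
        for (int i = words.length - 1; i >= 0; i--) {
            reversed.append(words[i]);
            if (i > 0) {
                reversed.append(separator);
            }
        }
        return reversed.toString();
    }
}
```

In the job, the tMap calls this routine with the extracted sentence and (presumably) a single space as the separator.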

Once the above routine has been created, it is simply a case of joining the components as seen in the screenshot of the web service and configuring them. I shall go through each component below.

1) "tRESTRequest_1" (tRESTRequest)

This component is used to configure the REST service. We need to ensure that it is set up to receive a POST body supplied as a JSON String. The screen shot below shows the configuration of this component....

The REST Endpoint is set to "/googlehome". This is in the red box above. The HTTP Verb is set to POST. This is in the orange box above. The URI Pattern which forms the end of the endpoint is set to "/action". This is in the green box above. The Consumes value is set to JSON. This is to inform the service to expect a JSON body. This is set in the purple box above.

Once the values specified above have been set, click on the Output Flow button (in the blue box) to configure the input schema. This will produce the following popup window....

Set the schema name to body. Then click OK. Once you click OK the schema window will appear as below....

Here we want to create a single String column called body. To do this, click on the green plus symbol (surrounded by the green box above) and give the column the name "body".

The tRESTRequest component is now configured.

2) "Extract sentence" (tExtractJSONFields)

This component is used to extract the sentence parameter from the JSON String that is supplied to the service as the body. In order to do this I am using JSONPath. As mentioned before, the format of the message we are working with is here. This component is configured as below....

The Read By drop down is set to JsonPath (in the red box above). The JSON Field drop down is set to body (in the blue box above). The Loop Jsonpath query value is set to "$.result.parameters" (in the orange box above). This is used to set the loop path to the "parameters" section of the JSON String. The "sentence" column is mapped to ".sentence" (in the green box above).

Once this component has been configured as above, we are ready to move on to the next component.

3) "Reorder sentenced" (tMap)

This component is used to call the reverseSentence method (the code for this is shown at the beginning of this section). It also builds the response JSON String which is described here. I have chosen to hard code most of this for this tutorial, but you could choose to make this a lot more dynamic depending on the functionality you require. The configuration of the tMap can be seen below....

As you can see, there are 2 tMap variables (sentence_exists and reversed_sentence) and one output value. The tMap variable code is shown below:

The code above sets the reversed_sentence tMap variable to be both the text displayed by the Agent (if using a text input/output device) and the text spoken by a device that communicates by speech. The rest of the JSON String is hardcoded.
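To make the shape of that output concrete, here is a hedged Java sketch. The buildResponse helper is hypothetical (in the job this logic lives in the tMap variables and output expression), but the JSON it produces follows the response format linked above:

```java
public class ResponseSketch {

    /**
     * Hypothetical helper mirroring the tMap output expression. Only the
     * "speech" (spoken) and "displayText" (displayed) values vary; the
     * rest of the JSON String is hardcoded, as in the tMap.
     */
    public static String buildResponse(String reversedSentence) {
        return "{"
             + "\"speech\":\"" + reversedSentence + "\","
             + "\"displayText\":\"" + reversedSentence + "\","
             + "\"source\":\"TalendFirstAgent\""
             + "}";
    }
}
```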

4) "tRESTResponse_1" (tRESTResponse)

This component is used to send the response message to the calling Agent. It needs no specific configuration and can simply be connected to the tMap component preceding it.

Testing the Web Service with the Agent

As I said earlier, the Actions on Google functionality cannot be used with an insecure web service. However, the API.AI functionality can be. In order to test what we have done so far we can do the following.....

1) Test using the Web Service in the Studio - To do this you will need to sort out port forwarding to expose your Web Service endpoint to the outside world, run the service in the Studio, update the Fulfillment section of API.AI to point to the Web Service endpoint and use the API.AI testing resource.
2) Test using the Web Service in Apache Karaf - To do this you will need to set up Apache Karaf on your local machine or a web server. If it is configured locally, you will need to sort out port forwarding to expose the endpoint to the outside world. If it is set up on a web server, then port forwarding will not be necessary. Once this is done you will need to update the Fulfillment section of API.AI to point to the Web Service endpoint and then use the API.AI testing resource.

To get access to the API.AI testing resource, go to the Integrations menu option (on the left hand side of the API.AI interface), click on it and you will be presented with the screen below....

Ensure that "Publish" is selected (surrounded by the green box) and then click on the URL surrounded by the blue box. This will take you to the screen below....

As you can see, when you type "Reverse what I say hello world" the Agent responds "world hello". So it is working as expected.

Hooking the Agent up to Google Home

Now that we know the Agent is working, we can configure it to run with Google Home or the Actions on Google Web Simulator. Before we can do this, we need to remember that the Web Service must be available on a secure endpoint (https). If you cannot achieve this yourself, you can use the Web Service that I have exposed here: https://www.rilhia.com:9001/services/googlehome/action. Please don't overuse it, though. Remember to update the Fulfillment Webhook section of your Agent.

Once the Agent has been updated to use the secure Web Service, we need to update it to use Google Home or the Actions on Google Web Simulator. This is done simply by going to the Integrations menu option; click it and you will be presented with the following screen:

In order to switch the Actions on Google functionality on, simply click the switch surrounded by a blue box above.

You will then be presented with the following screen where you will need to click Authorize and then Close.

Once this has been done, we are ready to test on the Actions on Google Web Simulator. Click on the Actions on Google Web Simulator link to open it in a new window. You will see the following screen once you have logged in (you will need your Google Account). I use Chrome to ensure I am always logged in.

Click on the Action Package button (surrounded by a red box) in order to activate the Agent. You will see the following screen....

Click the Preview for 24 Hours link (surrounded by a red box above). The Agent is now authorized to run. Now we can test it. Make sure your speakers are turned on.

To initiate the Agent we need to type "Let me talk to Talend First Agent" in the dialog box. Once you hit enter you will see something similar to the screen below if everything has gone correctly.....

I have obscured a couple of the lines of JSON, since these hold authentication key values.

Once we have initiated the Agent, we can ask it to perform its function. To do this we type (or say if you have your mic switched on) "Reverse what I say hello world". Once done, you will see the following....

If you have got this far, it is working.

Testing this on a Google Home device is very simple once you have got this far. You simply need to make sure your Google Home device is using the same Google Account as you have been working with here. Once this is sorted, say "Let me talk to Talend First Agent". It will then respond with its intro message. You can then have a play. I tested with "How much wood would a woodchuck chuck if a woodchuck could chuck wood?" It should turn out something like below....

Hopefully, you have got this tutorial working and are able to reproduce something similar to what you can see in the video. While this is not the most useful way of using this functionality, I hope it provides an idea of how these technologies can be combined to provide some very powerful functionality.

Personally, I can think of dozens of potential uses for this, and it also opens many doors that Google has left shut so far with the access they have given via their API. Using Talend with this API, you can read emails, make use of the uPnP functionality I have described in other tutorials, get news articles read to you, start Talend jobs, send Tweets, update Facebook, and interact with many other IoT devices; the list is almost endless. If you have a Google Home device and access to Talend Enterprise Edition, you can also use it with the Talend Metaservlet, which opens many voice-activated Talend doors. As I said in the title, this is potentially pretty awesome!!

A copy of the completed tutorial can be found here. This tutorial was built using Talend ESB 6.2. It can also be imported into subsequent versions. However, it cannot be imported into earlier versions, so you will either need to upgrade or recreate it following the tutorial.

Please feel free to contact me about any issues you may have with this tutorial. While I have only very recently started looking at Google Home and the API.AI functionality, I might be able to help you with problems experienced with this tutorial. It might also help highlight areas that need correcting or elaborating on. Have fun!

About the Author – Richard Hall

Richard comes from a background of over 10 years working in Data Integration and has moved his focus to Application Integration over the last few years. Throughout his career, he has worked in high pressure, delivery driven environments. He has provided Data Integration and Application Integration consulting services in Africa, Asia, North America, Europe, and Australia, to Banks, Telcos, Insurance companies, Finance companies, Media leaders and many other smaller entities.