Actions on Google is a developer platform that lets you create software to extend the functionality of the Google Assistant, Google's virtual personal assistant, across more than 1 billion devices, including smart speakers, phones, cars, TVs, headphones, and more. Users engage the Google Assistant in conversation to get things done, like buying groceries or booking a ride (for a complete list of what's possible now, see the Actions directory). As a developer, you can use Actions on Google to easily create and manage delightful and effective conversational experiences between users and your own third-party service.

This codelab module is part of a multi-module tutorial. Each module can be taken standalone or in a learning sequence with other modules. In each module, you'll be provided with end-to-end instructions on how to build an Action from given software requirements. We'll also teach the necessary concepts and best practices for implementing Actions that give users high-quality conversational experiences.

This codelab covers intermediate level concepts for developing with Actions on Google. We strongly recommend that you familiarize yourself with the topics covered in Build Actions for the Google Assistant (Level 1) before starting this codelab.

What you'll build

In this codelab, you'll build a sophisticated conversational Action with multiple features:

Supports deep links to directly launch the user into certain points of dialog.

Uses utilities provided by the Actions on Google platform to fetch the user's name and address them personally.

For this codelab, you're going to start with the Dialogflow intents from the Level 1 codelab, but you'll develop the webhook locally on your machine and deploy it using Cloud Functions for Firebase.

Key terms:

Cloud Functions for Firebase: A service provided by Google that lets you run your webhook code in a managed environment in the cloud. You use the Firebase CLI (command-line interface) tools to deploy your webhook to Google Cloud Functions.

Why develop your Actions locally?

Developing your webhook on a local machine, rather than in the Dialogflow inline editor, gives you more control over your programming and deployment environment. This provides several advantages:

A locally-developed webhook supports a much larger suite of Actions on Google features.

You are not restricted to only one webhook implementation file (index.js).

Developing in a local environment also lets you follow software engineering best practices, such as using version control, for your Actions project.

Download your base files

To get the base files for this codelab, run the following command to clone the GitHub repository for the Level 1 codelab.

git clone https://github.com/actions-on-google/codelabs-nodejs

This repository contains the following important files:

level1-complete/functions/index.js. The JavaScript file that contains your webhook's fulfillment code. This is the main file that you'll be editing to add additional Actions and functionality.

level1-complete/functions/package.json. This file outlines dependencies and other metadata for this Node.js project. You can ignore this file for this codelab; you should only need to edit this file if you want to use different versions of the Actions on Google client library or other Node.js modules.

level1-complete/codelab-level-one.zip. This is the Dialogflow agent file for the Level 1 codelab. If you've already completed the Level 1 codelab, you can safely ignore this file.

For the sake of clarity, we strongly recommend renaming the /level1-complete directory to /level2. You can do so by using the mv command in your terminal. For example:

$ cd codelabs-nodejs
$ mv ./level1-complete ./level2

Check your Google permission settings

In order to test the Action that you'll build for this codelab, you need to enable the necessary permissions.

Paste the URL you copied from the Firebase dashboard if it doesn't already appear.

Click Save.

Verify your project is correctly set up

At this point, users can start a conversation by explicitly invoking your Action. Once users are mid-conversation, they can trigger the ‘favorite color' custom intent by providing a color. Dialogflow will parse the user's input to extract the information your fulfillment needs—namely, the color—and send this to your fulfillment. Your fulfillment then auto-generates a lucky number to send back to the user.
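As a refresher, a minimal version of that fulfillment looks roughly like the sketch below (a sketch only; your own index.js from the Level 1 codelab is the source of truth, and the lucky-number logic shown here is illustrative):

'use strict';

const functions = require('firebase-functions');
const {dialogflow} = require('actions-on-google');

const app = dialogflow({debug: true});

// Dialogflow extracts the 'color' parameter and passes it to this handler.
app.intent('favorite color', (conv, {color}) => {
  // Illustrative lucky-number logic: derive a number from the color name.
  const luckyNumber = color.length;
  conv.close(`Your lucky number is ${luckyNumber}.`);
});

// Expose the Dialogflow app as a Cloud Function for Firebase.
exports.dialogflowFirebaseFulfillment = functions.https.onRequest(app);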

Tip: You can find the most recent information about using the Actions simulator in this guide. Please refer there if you run into any issues following the steps listed below.

To test out your Action in the Actions simulator:

In the Dialogflow console left navigation, click on Integrations > Google Assistant.

If you don't see this response, there was likely an issue with your Firebase setup. In this case, try repeating the steps under the ‘Deploy your fulfillment' section.

Your Actions project always has an invocation name, like "Google IO 18". When users say "Talk to Google IO 18", this triggers the Dialogflow welcome intent. Every Dialogflow agent has one welcome intent which acts as an entry point for users to start conversations.

Most of the time, users would rather jump to the specific task they want to accomplish than start at the beginning of the conversation every time. You can provide explicit deep links and implicit invocations as shortcuts into the conversation to help users get things done more efficiently.

Key terms

Explicit invocation: When users invoke the welcome intent by using the Action's name (e.g. "Hey Google, talk to Movie Time"). Users can also deep link into an Action by saying an Action invocation phrase along with an intent (e.g. "Hey Google, talk to Movie Time to look up showtimes for today").

Implicit invocation: When users invoke your Action without using its name (e.g. "Hey Google, I need a chicken soup recipe"). The Assistant matches the user's request based on various signals, and presents the user with a selection of Actions they can choose to fulfill their intent.

Adding deep links and implicit invocations to your Actions is a simple, single-step process using the Google Assistant integration page in the Dialogflow console.

Add intent for deep linking and implicit invocation

In your Actions project, you should have defined a custom Dialogflow intent called ‘favorite color' in an agent (this was covered in the Level 1 codelab). The agent parses your training phrases, like "I love yellow" and "Purple is my favorite," extracts the color parameter from each phrase, and makes it available to your fulfillment.

For this codelab, you're going to add the ‘favorite color' intent as an implicit invocation, meaning that users can invoke that intent and skip the welcome intent. Doing this also enables users to explicitly invoke the ‘favorite color' intent as a deep link (for example, "Hey Google, talk to my test app about blue"). The training phrases and parameters you defined for the ‘favorite color' intent enable Dialogflow to extract the color parameter when users invoke this deep link.

To add your intent for deep linking and implicit invocation, do the following:

Under Discovery > Implicit Invocations, click Add intent, then select favorite color.

The Assistant will now listen for users to provide a color in their invocation and extract the color parameter for your fulfillment.

Test your deep link

Tip: You can find the most recent information about using the Actions simulator in this guide. Please refer there if you run into any issues following the steps listed below.

To test out your deep link in the Actions simulator:

Click Test to update your Actions project.

Type "Talk to my test app about blue" into the Input field and hit enter.

Define a custom fallback intent

It's good practice to create a custom fallback intent to handle invocation phrases that don't provide the parameters you're looking for. For example, instead of saying a color, the user might say something unexpected like "Talk to my test app about bananas". The term "bananas" wouldn't match any of our Dialogflow intents, so we need to build a catch-all intent.

Since the Assistant now listens for any phrase that matches the ‘favorite color' intent, you should provide a custom fallback intent specifically for catching anything else.

To set up your custom fallback intent, do the following:

In the Dialogflow console, click on Intents in the left-navigation, then click Create Intent.

Name your intent ‘Unrecognized Deep Link' or equivalent. This intent name won't be referenced in your webhook so you can call it whatever you like.

Tip: The intent name is case-sensitive. When referencing it in your webhook, make sure to type it in exactly as you specified in the Dialogflow console.

Under Contexts, click Add input context and type "google_assistant_welcome". By specifying that this intent uses the ‘google_assistant_welcome' input context, it can only be invoked at the start of the conversation. After you've entered your input context, "google_assistant_welcome" will appear as an output context as well. Click the x to remove that output context.

Under Training phrases, add "banana" (or any other noun) as a user expression.

We'll use the @sys.any entity to tell Dialogflow to generalize the expression to any grammar (not just "banana"). Double-click "banana", then filter for or select @sys.any.

A warning message will pop up advising you not to use the @sys.any entity. You can safely ignore this for now; click OK. (Generally, it's not advisable to use the @sys.any entity, since it can overpower other intents' speech biasing, but this is a special case where we ensure it triggers only at invocation time when no other intent has matched.)

Under Responses, add "Sorry, I am not sure about $any. What's your favorite color?" as a Text response.

Click Save.

Test your custom fallback intent

Tip: You can find the most recent information about using the Actions simulator in this guide. Please refer there if you run into any issues following the steps listed below.

To test out your custom fallback intent in the Actions simulator, type "Talk to my test app about banana" into the Input field and hit enter.

You can make your Actions more engaging and interactive by using personalized information from the user. To request access to user information, your webhook can use helper intents to obtain values with which to personalize your responses.

Key terms:

Helper intents: Your fulfillment calls a helper intent when it wants the Assistant to handle part of a conversation. Helper intents return frequently-requested information such as the user's name and location (actions_intent_PERMISSION), a user selection (actions_intent_OPTION), or a date and time (actions_intent_DATETIME). To learn more about helper intents, see this guide.

Get user information using permission helper intent

You can use the actions_intent_PERMISSION helper intent to obtain the user's display name, with their permission. To use the permission helper intent, your webhook asks the Assistant to collect the permission and then handles the user's answer.
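A minimal sketch of asking for the NAME permission from your welcome intent handler, using the Permission helper from the actions-on-google client library (the context string shown to the user is illustrative):

const {dialogflow, Permission} = require('actions-on-google');
const app = dialogflow({debug: true});

app.intent('Default Welcome Intent', (conv) => {
  // Ask the Assistant to request the user's display name on your behalf.
  conv.ask(new Permission({
    context: 'Hi there, to get to know you better',  // illustrative reason read to the user
    permissions: 'NAME',
  }));
});

When the user answers, the Assistant sends the result back to your fulfillment via the actions_intent_PERMISSION intent, which you handle next.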

Customize responses with user information

Next, you'll need to update your webhook to handle the response. You'll use the user's information in your response if they granted permission, and gracefully move the conversation forward if they did not.

You register a callback function to handle the actions_intent_PERMISSION intent you created earlier. In the callback, you first check whether the user granted permission to know their display name. The client library passes this argument to the callback function as the third parameter, here called permissionGranted.

The conv.user.name.display value represents the user's display name, sent to your webhook as part of the HTTP request body. If the user grants permission, you store the value of conv.user.name.display in a property called userName of the conv.data object.

To provide additional hints to the user on how to continue the conversation, you call the Suggestions() function to create suggestion chips that recommend some example colors. If the user is on a device with a screen, they can provide their input by tapping on a chip rather than by saying or typing their response.
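Putting these pieces together, the handler might look like the following sketch (the response wording and suggested colors are placeholders; Suggestions comes from the same actions-on-google client library):

const {Suggestions} = require('actions-on-google');

// 'app' is the Dialogflow app already defined in index.js.
app.intent('actions_intent_PERMISSION', (conv, params, permissionGranted) => {
  if (!permissionGranted) {
    // The user declined; continue the conversation without their name.
    conv.ask(`OK, no worries. What's your favorite color?`);
  } else {
    // Store the display name for later turns of the conversation.
    conv.data.userName = conv.user.name.display;
    conv.ask(`Thanks, ${conv.data.userName}. What's your favorite color?`);
  }
  // Suggestion chips hint at possible answers on devices with screens.
  conv.ask(new Suggestions('Blue', 'Red', 'Green'));
});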

Tip: Suggestion chips are useful to hint at responses to continue or pivot the conversation. If during the conversation there is a primary call for action, consider listing that as the first suggestion chip. For best practices in using suggestion chips, see the conversational design guide.

The conv.data object is a data structure provided by the client library for in-dialog storage. You can set and manipulate the properties on this object throughout the duration of the conversation for this user.

Here you modify the callback function for the ‘favorite color' intent to use the userName property to address the user by name. If the conv.data object doesn't have a property called userName (that is, the user previously denied permission to know their name, so the property was never set) then your webhook still responds, but without the user's name.
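A sketch of that modified handler, reusing the illustrative lucky-number logic from Level 1:

app.intent('favorite color', (conv, {color}) => {
  const luckyNumber = color.length;
  if (conv.data.userName) {
    // Address the user by the name stored earlier in conv.data.
    conv.close(`${conv.data.userName}, your lucky number is ${luckyNumber}.`);
  } else {
    // The user declined the name permission, so respond without it.
    conv.close(`Your lucky number is ${luckyNumber}.`);
  }
});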

Save your file.

In the terminal, run the following command to deploy your webhook to Firebase.

firebase deploy --project <PROJECT_ID>

Test your code

Tip: You can find the most recent information about using the Actions simulator in this guide. Please refer there if you run into any issues following the steps listed below.

To test out your Action in the Actions simulator:

Type "Talk to my test app" into the Input field and hit enter.

Type "yes". You should see a prompt followed by suggestion chips displayed under Suggested input.

Type "blue".

You can embed SSML in your response strings to alter the sound of your spoken responses, or even embed sound effects or other audio clips.

Key terms:

Speech Synthesis Markup Language (SSML): A markup language that provides a standard way to mark up text for the generation of synthetic speech. You can use SSML when you need additional control over how the Assistant generates the speech from the text in your response. To learn more about SSML in Actions for Google, see this guide.

Here, you declare an audioSound variable containing the string URL for a statically hosted audio file on the web. You wrap the strings for the user response in <speak> SSML tags, indicating to the Google Assistant that your response should be parsed as SSML.

The <audio> tag embedded in the string indicates that you want the Assistant to play audio at that point in the response. The src attribute of that tag indicates where the audio is hosted.
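A sketch of the updated ‘favorite color' handler with SSML (the audio URL is a placeholder for whatever statically hosted file you choose):

// Placeholder URL for a statically hosted audio file; substitute your own.
const audioSound = 'https://actions.google.com/sounds/v1/cartoon/clang_and_wobble.ogg';

app.intent('favorite color', (conv, {color}) => {
  const luckyNumber = color.length;
  const audio = `<audio src="${audioSound}"></audio>`;
  if (conv.data.userName) {
    // The <speak> tags tell the Assistant to parse the response as SSML.
    conv.close(`<speak>${conv.data.userName}, your lucky number is ${luckyNumber}.${audio}</speak>`);
  } else {
    conv.close(`<speak>Your lucky number is ${luckyNumber}.${audio}</speak>`);
  }
});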

Save your file.

In the terminal, run the following command to deploy your webhook to Firebase.

firebase deploy --project <PROJECT_ID>

Test your code

Tip: You can find the most recent information about using the Actions simulator in this guide. Please refer there if you run into any issues following the steps listed below.

To test out your Action in the Actions simulator:

Type "Talk to my test app" into the Input field and hit enter.

Type "yes".

Type "blue". If everything works correctly, the user should hear this sound effect in the response.

To keep the conversation going, you can add follow-up intents that will trigger based on the user's response after a particular intent. To add follow-up intents to ‘favorite color', do the following:

Tip: If you test your Action at this point, your conversation ends when you trigger the favorite color intent. This is because the intent's fulfillment currently calls the conv.close method. In the next step, you'll update your fulfillment to call conv.ask instead, which allows the conversation to continue.
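For example, the change is roughly a one-liner inside the ‘favorite color' handler (the follow-up question wording here is illustrative):

// Before: conv.close(...) ends the conversation after the response.
// After: conv.ask(...) keeps the mic open so follow-up intents can match.
conv.ask(`${conv.data.userName}, your lucky number is ${luckyNumber}. ` +
  `Would you like to hear about some fake colors?`);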

As you expand your conversational app, you can use custom entities to further deepen and personalize the conversation. We'll cover how to do this in this section.

Add a custom entity

So far, you've only been using built-in entities (@sys.color, @sys.any) to match user input. You're going to create a custom entity (also called a developer entity) in Dialogflow so that, when a user provides one of a few fake colors, you can follow up with a custom response from your webhook.

In the constructor calls, you pass configuration options relevant to each specific color, including:

a title with the color name,

a text description providing more detail about the card's contents,

a URL for an image which represents that color and its alt-text, and

a ‘display' configuration, which determines the card's background color (in this case, plain white).

Finally, you set a callback function for the ‘favorite fake color' intent, which uses the fakeColor option that the user selected to create a card corresponding to that fake color and present it to the user.
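A sketch of what that could look like (the image URLs, alt text, and descriptions below are placeholders):

const {BasicCard, Image} = require('actions-on-google');

// Map each fake color to a card describing it.
const colorMap = {
  'indigo taco': new BasicCard({
    title: 'Indigo Taco',
    text: 'A deep, calming shade of indigo.',  // placeholder description
    image: new Image({
      url: 'https://example.com/indigo-taco.png',  // placeholder image URL
      alt: 'Indigo Taco color swatch',
    }),
    display: 'WHITE',  // plain white background
  }),
  // ...define the remaining fake colors the same way...
};

app.intent('favorite fake color', (conv, {fakeColor}) => {
  // Look up the card for the fake color the user selected and present it.
  conv.close(`Here you go.`, colorMap[fakeColor]);
});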

In the terminal, run the following command to deploy your webhook to Firebase.

firebase deploy --project <PROJECT_ID>

Test your code

Tip: You can find the most recent information about using the Actions simulator in this guide. Please refer there if you run into any issues following the steps listed below.

To test out your Action in the Actions simulator:

Type "Talk to my test app" into the Input field and hit enter.

Type "yes".

Type "blue".

Type "sure".

Type "tell me about the first one".

When you select a fake color, you should receive a response that includes a basic card, similar to the one below.

Congratulations!

You've now covered the intermediate skills necessary to build conversational user interfaces with Actions on Google.

What we've covered

How to set up for local fulfillment development using the Node.js client library and Firebase command-line deployment.

How to create deep links for your Actions using the Google Assistant integration in Dialogflow.

How to use Actions on Google helper intents to personalize your responses.

What's next

In the next codelab, you'll make further refinements to your Actions project. You'll learn more about conversational design, how to handle user silence, and how to present users with a visual selection response on devices with supported screens.