Today, we're pleased to release a tool, with source code, that lets you graphically design interactive adventure games for Alexa. Interactive adventure games represent a new category of skill that allows customers to engage with stories using their voice. With these skills, you can showcase original content or build compelling companion experiences to existing books, movies, and games. For example, in The Wayne Investigation skill (4.7 stars, 48 reviews), you’re transported to Gotham City a few days after the murder of Bruce Wayne’s parents. You play the part of a detective, investigating the crime and interrogating interesting characters, with Alexa guiding you through multiple virtual rooms, giving you choices, and helping you find important clues. The Magic Door, an original adventure series for Alexa, lets you tell Alexa what choices to make as you navigate a forest, a garden, or an ancient temple. Learn more about game skills on Alexa.

The tool provides an easy-to-use front end that lets you instantly deploy code for your story, or use the generated code as a starting point for more complex projects. It was written in Node.js by Thomas Yuill, a designer and engineer on the Amazon Advertising team. The tool is available now as a GitHub project: https://github.com/alexa/interactive-adventure-game-tool

If you want to get started quickly, you can use our Trivia or Decision Tree skill templates, which make it easy for developers and non-developers alike to create a game skill similar to “European Vacation Recommender” or “Astronomy Trivia." The templates leverage AWS Lambda and the Alexa Skills Kit (ASK) while providing the business logic, use cases, error handling, and help functions for your skill. You just need to come up with a decision-tree idea or trivia game, plug in your questions, and edit the sample provided (we walk you through how it’s done). It's a valuable way to quickly learn the end-to-end process for building and publishing an Alexa skill.

For inspiration on developing innovative Alexa skills, check out the Wayne Investigation, a skill developed by Warner Bros. to promote the recently released Batman v Superman: Dawn of Justice feature film. In this audio-only, interactive adventure game, you’re transported to Gotham City a few days after the murder of Bruce Wayne’s parents. You play the part of a detective, investigating the crime and interrogating interesting characters, with Alexa guiding you through multiple virtual rooms, giving you choices, and helping you find important clues.

The game, created using the Alexa Skills Kit, is a collaboration among Amazon, Warner Bros., head writers at DC Comics, and Cruel & Unusual Films (the production house run by Batman v Superman director Zack Snyder and executive producers Debbie Snyder and Wes Coller). With these companies behind the game and its affiliation with a superhero film franchise, it’s not surprising that The Wayne Investigation was a big hit.

But it’s become enormously popular of its own accord. Launched on March 1, this was the first Alexa skill to combine Alexa technology with produced audio assets—namely, compelling music and sound effects—and the response has been extraordinary. During its first week, The Wayne Investigation was engaged 7x more (per weekly average) than all other skills combined. Currently, The Wayne Investigation rates in the top 5% of skills (earning 4.8 out of 5 stars) and is the #1 skill for both total time spent engaging with the skill and average time spent per user.

The team scripted the experience by building it around a gaming map with directions and actions in each room. Once the script was finalized, they used a decision tree model to translate the experience into code, which is hosted in AWS. From three starting actions, users can make up to 37 decisions, each taking the user down paths that lead to new and iconic Gotham characters and locations before completing the game. An efficient (and lucky) walkthrough of the Wayne Investigation takes 5 to 10 minutes, but fans who want to explore every nook and cranny can spend as long as 40 minutes in this Gotham City.

An added benefit of creating the Wayne Investigation skill is that it led to the creation of a tool that allows developers to graphically design interactive adventure games. Today, we’re pleased to announce that we’ve made a tool with source code available to make it easier for the Alexa community to create similar games.

To experience the skill, simply enable it in your Alexa companion app and then say, “Alexa, open the Wayne Investigation.”

We are excited to introduce a new way to help you quickly build useful and meaningful skills for Alexa. The new Decision Tree skill template makes it easy for developers and non-developers to create skills that ask you a series of questions and then give you an answer. This is a great starting point for simple adventure games and magazine-style quizzes like “What kind of job is good for me?” The template leverages AWS Lambda and the Alexa Skills Kit, and provides built-in business logic, use cases, error handling, and help functions for your new skill. Simply come up with the idea, plug in your decision tree content, and edit the sample provided. Follow this tutorial and we'll show you how it's done.
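To make the idea concrete, here is a minimal sketch of the kind of decision-tree content such a template works from. The node names, questions, and answers below are made up for illustration; they are not part of the actual template.

```javascript
// A hypothetical decision tree: each node asks a yes/no question and points
// at either another node (by name) or a final answer (a leaf object).
const tree = {
  start: { question: 'Do you prefer working with people?', yes: 'outdoors', no: 'computers' },
  outdoors: { question: 'Do you like being outdoors?', yes: { answer: 'Park ranger' }, no: { answer: 'Teacher' } },
  computers: { question: 'Do you enjoy math?', yes: { answer: 'Engineer' }, no: { answer: 'Writer' } }
};

// Walk the tree with the user's yes/no answers so far. Returns the final
// answer if a leaf is reached, or the next question to ask otherwise.
function walk(tree, answers) {
  let node = tree.start;
  for (const a of answers) {
    const next = a ? node.yes : node.no;
    if (typeof next === 'string') {
      node = tree[next];   // a named node: another question follows
    } else {
      return next.answer;  // a leaf: we have the user's result
    }
  }
  return node.question;    // still mid-quiz: ask the current question
}
```

The skill's dialogue then reduces to repeatedly asking `walk`'s question and feeding the user's yes/no reply back in, which is why a template can provide all the surrounding plumbing for you.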

Using the Alexa Skills Kit, you can build an application that can receive and respond to voice requests made to Alexa. In this tutorial, you’ll build a web service to handle notifications from Alexa and map this service to a skill in the Amazon Developer Portal, making it available on your Echo, Alexa-enabled device, or Echosim.io for testing and to all Alexa users after publication.
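In essence, the service you'll build receives a JSON request from Alexa and returns a JSON response for Alexa to speak. The sketch below shows the shape of such a handler; the `HelloIntent` name is a placeholder, not something this tutorial defines.

```javascript
// Build a minimal Alexa-style JSON response with plain-text speech.
function buildResponse(speechText, endSession) {
  return {
    version: '1.0',
    response: {
      outputSpeech: { type: 'PlainText', text: speechText },
      shouldEndSession: endSession
    }
  };
}

// Dispatch on the request type Alexa sends: a LaunchRequest when the user
// opens the skill, or an IntentRequest when they say something that maps
// to an intent in your interaction model.
function handler(event) {
  const type = event.request.type;
  if (type === 'LaunchRequest') {
    return buildResponse('Welcome. Ask me to say hello.', false);
  }
  if (type === 'IntentRequest' && event.request.intent.name === 'HelloIntent') {
    return buildResponse('Hello, world!', true);
  }
  return buildResponse("Sorry, I didn't get that.", false);
}
```

The Developer Portal configuration then maps your skill's invocation name to this service, so a spoken request arrives as the `event` object above.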

When finished, you'll know how to:

Create a skill - This tutorial will walk Alexa developers through all the required steps involved in creating a skill. No previous experience required.

Design a Voice User Interface - Creating this skill will help you understand the basics of creating a working Voice User Interface (VUI) while using a cut/paste approach to development. You will learn by doing and end up with a published Alexa skill. This tutorial includes instructions on how to customize the skill and submit for certification. For guidance on designing a voice experience with Alexa you can also watch this video.

Use JavaScript/Node.js and the Alexa Skills Kit to create a skill - You will use the template as a guide but the customization is up to you. For more background information on using the Alexa Skills Kit please watch this video.

Get your skill published - Once you have completed your skill, this tutorial will guide you through testing your skill and sending your skill through the publication process to make it available for any Alexa user to enable.

Today’s guest blog post is from Troy Petrunoff, content strategist at AngelHack. Amazon works with companies like AngelHack that are dedicated to advancing the art of voice user experience through hackathons.

This year Amazon Alexa teamed up with AngelHack, the pioneers of global hackathons, for their ninth Global Hackathon Series. Since 2011, the series has exposed over 100,000 developers from around the world to new technologies from sponsors ranging from small startups to large corporations. Amazon Alexa joined the fun this year at nine AngelHack events, sending Solutions Architects and Amazon Echo devices to give talented developers, designers, and entrepreneurs the chance to learn about Alexa technology. Thirty-two teams incorporated Alexa technology into their projects.

Of the nine events Amazon Alexa sponsored, three of the grand prize winners won using Alexa. Winning the AngelHack Grand Prize earned these teams an exclusive invitation to the invite-only AngelHack HACKcelerator program, which connects ambitious developers with thought leaders and experienced entrepreneurs to help them become more versatile, entrepreneurial, and successful. The program gives developers of promising hackathon projects the opportunity to listen and talk to some of the biggest players in the Silicon Valley tech scene on a weekly basis, all while providing them with the resources to successfully transition their hackathon project into a viable startup with early traction.

In addition to the grand prize, the Amazon Alexa team offered a challenge at each AngelHack event: the best voice user experience using Amazon Alexa. Beyond the three grand prize winning teams, two Alexa Challenge winners will also receive an invitation to the HACKcelerator program. Participating HACKcelerator teams will be provided with mentorship and other resources to prepare them for the Global Demo Day in San Francisco.

Earlier this year, Paul Cutsinger, Evangelist at Amazon Alexa, joined a team of developers and designers from Capital One at SXSW in Austin to launch the new Capital One skill for Alexa. The launch of the new skill garnered national attention, as Capital One was the first company to give customers the ability to interact with their credit card and bank accounts through Alexa-enabled devices. This week at the Amazon Developer Education Conference in NYC, Capital One announced another industry first by expanding the skill to enable its customers to access their auto and home loan accounts through Alexa.

"The Capital One skill for Alexa is all part of our efforts to help our customers manage their money on their terms – anytime and anywhere," said Ken Dodelin, Vice President, Digital Product Management, Capital One. “Now, you can access in real time all of your Capital One accounts—from credit cards to bank accounts to home and auto loans—using nothing but your voice with the Capital One skill.”

The skill is one of the top-rated Alexa skills, earning 4.5 out of 5 stars across 47 reviews. It enables Capital One customers to stay on top of their credit card, auto loan, mortgage, and home equity accounts by checking their balances, reviewing recent transactions, or making payments, and gives them real-time access to checking and savings account information so they can understand their available funds.

“Capital One has a state-of-the-art technology platform that allows us to quickly leverage emerging technologies, like Alexa,” said Scott Totman, Vice President of Digital Products Engineering, Capital One. “We were excited about the opportunity to provide a secure, convenient, and hands-free experience for our customers.”

Building the Skill

To bring the new skill to life, the Capital One team – made up of engineers, designers, and product managers – kicked off a two-phase development process.

“Last summer a few developers started experimenting with Echo devices, and, ultimately, combined efforts to scope out a single feature: fetching a customer’s credit card balance. That exercise quickly familiarized the team with the Alexa Skills Kit (ASK) and helped them determine the level of effort required to produce a full public offering,” said Totman. “The second phase kicked off in October and involved defining and building the initial set of skill capabilities, based on customer interviews and empathy based user research. Less than six months later we launched the first version of the Capital One skill for Alexa.”

The team also spent a lot of time finding the right balance between customers’ need for both convenience and security. In the end, Capital One worked with Amazon to strike the right balance and gave customers the option of adding a four-digit pin in order to access the skill and provide an additional layer of security. The pin can be changed or removed at the customer’s discretion.

“The Alexa Skills Kit is very straightforward. However, it is evolving quickly, so developers need to pay close attention to online documentation, webinars, and other learning opportunities in order to stay on top of new features and capabilities as they are released,” Totman said.

Finding the Right Voice

“We dedicated a lot of time to getting the conversation right from the start,” said Totman. “This meant we not only had to anticipate the questions customers were going to ask, but also how they were going to ask them.”

This was a really interesting challenge for Capital One’s design team. In order to make the skill feel like a personalized conversation, the team had to identify exactly where and how to inject personality and humor, while carefully considering customers’ priorities and the language they use to discuss finances.

“A lot goes into making sure our customers get what they expect from our personality, as well as what they expect from Alexa’s personality. That becomes especially visible when injecting humor, because what looks great on paper doesn’t always transition to the nuance of voice inflection, cadence, or the context of banking,” said Stephanie Hay, head of Capital One’s content strategy team. “But that’s the joy of design > build > iterate in a co-creation method; product, design, and engineering design the conversation together, hear Alexa say it, react, iterate, test it with actual customers, iterate further, and then get it to a point we all feel excited about.”

Looking Ahead

Capital One’s Alexa skill represents just the starting lineup of features. Capital One’s team continues to test, learn, and explore new features by focusing on customer needs and continually refining the experience.

“As customers become more familiar with using voice technologies, we anticipate growing demand for feature capabilities, as well as increased expectations regarding the sophistication of the conversation,” Totman said. “With voice technologies, we get to learn firsthand how customers are attempting to talk to us, which allows us to continually refine the conversation.”

“The possibilities with the Alexa Skills Kit are nearly endless, but I advise developers to be very thoughtful about the value of their skill,” said Totman. “Leveraging voice-activated technology is only worthwhile if you can clearly define how your solution will go above and beyond your existing digital offerings.”

Stay tuned to part two to learn how Capital One built their Alexa skill and added new capabilities.

Share other innovative ways you’re using Alexa in your life. Tweet us @alexadevs with hashtag #AlexaDevStory.

Get Started with Alexa Skills Kit

Are you ready to build your first (or next) Alexa skill? Build a custom skill or use one of our easy tutorials to get started quickly.

Today, we’re excited to announce the Amazon Alexa session track at AWS re:Invent 2016, the largest gathering of the global Amazon developer community. AWS re:Invent provides an opportunity to connect with peers and technology experts, engage in hands-on labs and bootcamps, and learn about new technologies and how to improve productivity, network security, and application performance, all while keeping infrastructure costs low. AWS re:Invent runs November 28 through December 2, 2016.

The Alexa track at AWS re:Invent will dive deep into the technology behind the Alexa Skills Kit and the Alexa Voice Service, with a special focus on using AWS Services to enable voice experiences. We’ll cover AWS Lambda, DynamoDB, CloudFormation, Cognito, Elastic Beanstalk and more. You’ll hear from senior evangelists and engineers and learn best practices from early Alexa developers. Here’s an early peek at the Alexa sessions.


ALX 201: How Capital One Built a Voice Experience for Banking

Tuesday, November 29, 2016

10:00 AM - 11:00 AM

Introductory

As we add thousands of skills to Alexa, our developers have uncovered some basic and more complex tips for building better skills. Whether you are new to Alexa skill development or if you have created skills that are live today, this session will help you understand how to create better voice experiences. Last year, Capital One joined Alexa on stage at re:Invent to talk about their experience building an Alexa skill. Hear from them one year later to learn from the challenges that they had to overcome and the results they are seeing from their skill.

ALX 202: How Amazon is Enabling the Future of Automotive

Thursday, December 1, 2016

11:30 AM - 12:30 PM

Introductory

The experience in the auto industry is changing. For both the driver and the car manufacturer, a whole new frontier is on the near horizon. What do you do with your time while the car is driving itself? How do I have a consistent experience while driving shared or borrowed cars? How do I stay safer and more aware in the ever increasing complexity of traffic, schedules, calls, messages and tweets? In this session we will discuss how the auto industry is facing new challenges and how the use of Amazon Alexa, IoT, Logistics services and the AWS Cloud is transforming the Mobility experience of the (very near) future.

ALX 301: Alexa in the Enterprise: How JPL Leverages Alexa to Further Space Exploration with Internet of Things

Wednesday, November 30, 2016

5:00 PM - 6:00 PM

Advanced

The Jet Propulsion Laboratory designs and creates some of the most advanced space robotics ever imagined. JPL IT is now innovating to help streamline how JPLers will work in the future in order to design, build, operate, and support these spacecraft. They hope to dramatically improve JPLers' workflows and make their work easier for them by enabling simple voice conversations with the room and the equipment across the entire enterprise.

What could this look like? Imagine just talking with the conference room to configure it. What if you could kick off advanced queries across AWS services and kick off AWS Kinesis tasks by simply speaking the commands? What if the laboratory could speak to you and warn you about anomalies or notify you of trends across your AWS infrastructure? What if you could control rovers by having a conversation with them and ask them questions? In this session, JPL will demonstrate how they leveraged AWS Lambda, DynamoDB and CloudWatch in their prototypes of these use cases and more. They will also discuss some of the technical challenges they are overcoming, including how to deploy and manage consumer devices such as the Amazon Echo across the enterprise, and give lessons learned. Join them as they use Alexa to query JPL databases, control conference room equipment and lights, and even drive a rover on stage, all with nothing but the power of voice!

ALX 302: Build a Serverless Back End for Your Alexa-Based Voice Interactions

Thursday, December 1, 2016

5:00 PM - 6:00 PM

Advanced

Learn how to develop voice-based serverless back ends for Alexa Voice Service (AVS) and Alexa devices using the Alexa Skills Kit (ASK), which allows you to add new voice-based interactions to Alexa. We’ll code a new skill, implemented by a serverless back end leveraging AWS services such as Amazon Cognito, AWS Lambda, and Amazon DynamoDB. Often, your skill needs to authenticate your users, link them back to your back-end systems, and persist state between user invocations. User authentication is performed by leveraging OAuth-compatible identity systems. Running such a system on your back end requires undifferentiated heavy lifting or boilerplate code. We’ll leverage Login with Amazon as the identity provider instead, allowing you to focus on your application implementation and not on the low-level user management parts. At the end of this session, you’ll be able to develop your own Alexa skills and use Amazon and AWS services to minimize the required back-end infrastructure. This session shows you how to deploy your Alexa skill code on a serverless infrastructure, leverage AWS Lambda, use Amazon Cognito and Login with Amazon to authenticate users, and leverage Amazon DynamoDB as a fully managed NoSQL data store.

ALX 303: Building a Smarter Home with Alexa

Thursday, December 1, 2016

1:00 PM - 2:00 PM

Advanced

This session introduces the beta process, the Smart Home Skill API, and how to quickly and easily set up a smart home so you can begin using Alexa to control lighting, blinds, and small appliances. We begin by going over what devices you can buy and share and some common best practices when enabling these devices in your home or office. We also demonstrate how to enable these devices and connect them with Alexa. We show you how to create groups and manage your home with your voice, as well as some tips and tricks for managing your home when you are away. This session explains how to use the Smart Home Skill API to create a custom skill to manage your smart home devices as well as lessons learned from dozens of customers and partners. Alexa smart home partner Ecobee joins us to talk about their experience in the Smart Home Skill API beta program.

ALX 304: Tips and Tricks on Bringing Alexa to Your Products

Friday, December 2, 2016

9:30 AM - 10:30 AM

Advanced

Ever wonder what it takes to add the power of Alexa to your own products? Are you curious about what Alexa partners have learned on their way to a successful product launch? In this session you will learn about the top tips and tricks on how to go from VUI newbie to an Alexa-enabled product launch. Key concepts around hardware selection, enabling far field voice interaction, building a robust Alexa Voice Service (AVS) client and more will be discussed along with customer and partner examples on how to plan for and avoid common challenges in product design, development and delivery.

ALX 305: From VUI to QA: Building a Voice-Based Adventure Game for Alexa

Friday, December 2, 2016

11:00 AM - 12:00 PM

Advanced

Hitting the submit button to publish your skill is similar to sending your child to their first day of school. You want it to be set up for a successful launch day and for many days thereafter. Learn how to set your skill up for success from Andy Huntwork, Alexa Principal Engineer and one of the creators of the popular Alexa skill "The Magic Door." You will learn the most common reasons why skills fail and also some of the more unique use cases. The purpose of this session is to help you build better skills by knowing what to look out for and what you can test for before submitting. In this session, you will learn what most developers do wrong, how to successfully test and QA your skill, how to set your skill up for successful certification, and the process of how a skill gets certified.

MAC 202: Deep Learning in Alexa

Introductory

Neural networks have a long and rich history in automatic speech recognition. In this talk, we present a brief primer on the origin of deep learning in spoken language, and then explore today’s world of Alexa. Alexa is the AWS service that understands spoken language and powers Amazon Echo. Alexa relies heavily on machine learning and deep neural networks for speech recognition, text-to-speech, language understanding, and more. We also discuss the Alexa Skills Kit, which lets any developer teach Alexa new skills.

We encourage you to check back because we’ll have more content announcements in the coming months.

In our first post, we shared why Discovery decided to build an Alexa skill and what requirements they outlined as they thought through what the voice experience should look like. In this post, we’ll share how they built and tested their Alexa skill and their tips for other Alexa developers.

Building and Testing the Shark Week Skill

When Stephen Garlick, Lead Development and Operations Engineer at Discovery Channel, took the lead in developing the Alexa skill, it was a chance to learn how to design a new experience for customers. He had no prior experience with AWS Lambda and Alexa Skills Kit (ASK). To start, he spent some time digging into online technical documentation and code samples provided on the Alexa Github repo. This helped him gain a deeper understanding of how to build the foundation of the Alexa skill and handle basic tasks.

By using AWS Lambda and ASK, Stephen and team were able to keep things simple and quickly deploy the code without the need to set up additional infrastructure to support the skill. Additionally, they were easily able to extend the node.js skill without having to create a skill from scratch.

Initially, Discovery used Alexa's voice to respond with facts; later, they decided to customize the experience with mp3 playback. To accomplish this, Stephen used the SSML support for mp3 playback, with Amazon S3 and CloudFront hosting the files reliably. Each mp3 was less than 90 seconds in length, encoded at 48 kbps, and adhered to the MPEG version 2 specification. All the resources were created and deployed using the AWS CloudFormation service.
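As a rough sketch of the approach (the CloudFront URL and wording below are placeholders, not Discovery's actual assets), the SSML response wraps the hosted mp3 in an audio tag and lets Alexa speak whatever follows it:

```javascript
// Build an SSML string that plays a hosted recording, then has Alexa
// speak a follow-up prompt. The mp3 must meet the limits noted above:
// under 90 seconds, 48 kbps, MPEG version 2.
function sharkFactSsml(mp3Url, followUp) {
  return '<speak>' +
         '<audio src="' + mp3Url + '"/> ' +
         followUp +
         '</speak>';
}

// Hypothetical usage: one recorded fact plus a spoken re-prompt.
const ssml = sharkFactSsml(
  'https://example.cloudfront.net/facts/fact-01.mp3',
  'Would you like to hear another fact?'
);
```

The skill returns this SSML as its outputSpeech (with type "SSML" instead of "PlainText"), which is how recorded audio and Alexa's own voice can be mixed in a single response.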

For the countdown feature, Stephen pulled the moment.js dependency into the Node.js skill to help simplify some time-based calculations. The countdown now combines an mp3 playback for everything except the actual time, which is spoken by Alexa.
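The underlying calculation is simple date math. The sketch below shows it with plain Date objects rather than moment.js so it stands alone; the kickoff date used here is illustrative, not the real Shark Week date.

```javascript
// Compute the remaining time between now and a kickoff date as a phrase
// Alexa can speak, while the surrounding copy comes from an mp3.
function countdownText(now, kickoff) {
  const ms = kickoff - now;                                   // Date - Date -> milliseconds
  const days = Math.floor(ms / (24 * 60 * 60 * 1000));
  const hours = Math.floor((ms % (24 * 60 * 60 * 1000)) / (60 * 60 * 1000));
  return days + ' days and ' + hours + ' hours';
}

// Hypothetical usage with an arbitrary kickoff date.
const remaining = countdownText(
  new Date('2016-06-24T00:00:00Z'),
  new Date('2016-06-26T05:00:00Z')
);
```

moment.js wraps this kind of arithmetic (durations, humanized output, time zones) so the real skill avoids the manual millisecond bookkeeping.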

To test the skill, they used the skill test pane within the Alexa app. The testing tool made it easy to quickly test various scenarios without an Alexa-enabled device. Once the skill was operating as expected (and desired) in the test pane, Stephen asked other people to test the Shark Week skill on Alexa-enabled devices. This allowed them to collect additional feedback and iterate accordingly.

Overall, the entire process of learning these new technologies, coding, and building the skill took no more than 12 hours. This included a few iterations of the Alexa skill as well.

Five Tips for Other Alexa Developers

Tip #1: Make The Skill As Human As Possible: Initially, Discovery had the Alexa voice state each of the randomized facts. In an attempt to assist with the pronunciation, they spelled a few of the words and numbers phonetically. However, in doing so, the cards displayed in the Alexa app weren't correct. It quickly became apparent that a recorded reading of each fact eliminated the pronunciation issues, enabled proper spelling of facts for the cards in the Alexa app, and made the entire experience more personal.

Tip #2: Plan for Time-Sensitive Coding: If you're building time-specific functionality (e.g., a countdown timer to a specific time), make sure you think about what happens when that time arrives. The team at Discovery accounted for the Shark Week kickoff by providing three different countdown messages based on the time in each specific time zone. The first was the countdown lead-in, the second was a message indicating that Shark Week had already started, and the third indicated that Shark Week had concluded and that the Shark Week website provides other shark-related information year-round.
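The three branches described above can be sketched as follows; the start and end dates in the usage example are placeholders for the real Shark Week window.

```javascript
// Pick which of the three messages to play based on where "now" falls
// relative to the event window.
function countdownMessage(now, start, end) {
  if (now < start) return 'countdown';  // lead-in: event hasn't started yet
  if (now < end) return 'started';      // event is currently underway
  return 'concluded';                   // event is over: point users elsewhere
}
```

In the real skill, each returned key would select a different mp3/speech response, so the experience stays sensible before, during, and after the event.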

Tip #3: Control for Volume: If you're using a combination of recordings and Alexa powered speech, make sure the volume levels are consistent throughout the experience.

Tip #4: Be Creative with Your Intent Schema and Utterances: People think, act, and speak differently. Therefore, it's important that you account for as many different phrasings as possible. For example, after you ask for a Shark Week fact, the skill will ask if you would like to hear another. Just a few of Discovery’s "no" utterances include "no," "nope," "no thanks," "no thank you," "not really," "definitely not," "no way," "nah," "negative," "no sir," "maybe another time," and many more. It's better to be as inclusive as possible, rather than having Alexa unable to understand.
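One way to picture this: in the sample utterances, every phrasing maps to the same intent, so the skill's code handles them all in one place. The intent name below is hypothetical (in practice the built-in AMAZON.NoIntent already covers many common negatives).

```javascript
// A sketch of sample-utterance lines: "<IntentName> <spoken phrase>".
const sampleUtterances = [
  'NoMoreFactsIntent no',
  'NoMoreFactsIntent nope',
  'NoMoreFactsIntent no thanks',
  'NoMoreFactsIntent no thank you',
  'NoMoreFactsIntent not really',
  'NoMoreFactsIntent definitely not',
  'NoMoreFactsIntent maybe another time'
];

// However many phrasings you list, they all resolve to a single intent.
const intents = new Set(sampleUtterances.map(u => u.split(' ')[0]));
```

Adding a new phrasing is then a one-line change to the utterances, with no change to the skill's logic.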

Tip #5: Take Chances: Push your limits and think big when it comes to building your Alexa skill. Discovery started the project with a broad scope in mind and was able to quickly iterate and resubmit the skill for certification.

Share other innovative ways you’re using Alexa in your life. Tweet us @alexadevs with hashtag #AlexaDevStory.

Get Started with Alexa Skills Kit

Are you ready to build your first (or next) Alexa skill? Build a custom skill or use one of our easy tutorials to get started quickly.

The Amazon Alexa team has collaborated with Big Nerd Ranch, known globally for its highly effective immersive development bootcamps and app development services, to develop deep technical training courses for the Alexa Skills Kit. Today we launch a new developer education experience that showcases all the free learning materials created in collaboration with Big Nerd Ranch.

Our six educational modules will dive into building voice user interfaces using the Alexa Skills Kit. The training materials will teach you about the Alexa skill architecture and interface configuration, slots and utterances, sessions and voice user interfaces, persistence, account linking, and certification and testing.

Each module page features a variety of learning materials:

Short and sweet videos you can easily share, save for later, or add to your own playlist on YouTube,

Learning objectives that summarize what you will learn,

Reference links to find more about Alexa Skills Kit features and technologies covered in the training,

Hello, my name is Michael Palermo, and I recently joined the Alexa team as the first dedicated evangelist for smart home. When friends and acquaintances ask what I do, they often look puzzled before I get past my title. Inevitably I get questions like: What is a “smart home”? Who or what is Alexa? Why are you called an evangelist?

In this post, I’ll answer a lot of these questions. Granted, you may already be familiar with some of the topics, but stay tuned as I will also provide additional insights as to why it might matter to you.

What is a “Smart Home”?

The term “smart home” or “connected home” (CoHo) refers to a residence consisting of one or more smart products that enhance the living experience with benefits such as convenience, control, and optimization of resources. A product is deemed “smart” when it is capable of communicating with other smart products and/or a user interface to manage it.

What is Alexa?

From a consumer perspective, a more familiar brand name is Echo. Alexa is the voice service that powers Echo and other similar devices like Amazon Tap and Echo Dot. With Alexa, developers can build new voice experiences with the Alexa Skills Kit (ASK) or by adding voice to connected devices with Alexa Voice Service (AVS).

Today we’re happy to announce the new alexa-sdk for Node.js to help you build skills faster and with less complexity. Creating an Alexa skill using the Alexa Skills Kit, Node.js, and AWS Lambda has become one of the most popular ways we see skills created today. The event-driven, non-blocking I/O model of Node.js is well suited to an Alexa skill, and Node.js has one of the largest ecosystems of open source libraries in the world. Plus, AWS Lambda is free for the first one million calls per month, which can cover skill hosting for most developers. And you don’t need to manage any SSL certificates when using AWS Lambda (since the Alexa Skills Kit is a trusted trigger).

While setting up an Alexa skill using AWS Lambda, Node.js, and the Alexa Skills Kit has been a simple process, the actual amount of code you have had to write has not. We have seen a large amount of time spent in Alexa skills on handling session attributes, skill state persistence, response building, and behavior modeling. With that in mind, the Alexa team set out to build an Alexa Skills Kit SDK specifically for Node.js that will help you avoid common hang-ups and focus on your skill’s logic instead of boilerplate code.

Enabling Faster Alexa Skill Development with the Alexa Skills Kit for Node.js (alexa-sdk)

With the new alexa-sdk, our goal is to help you build skills faster while allowing you to avoid unneeded complexity. Today, we are launching the SDK with the following capabilities:

Helper events for new sessions and unhandled requests that can act as ‘catch-all’ handlers

Helper functions to build state-machine based Intent handling

This makes it possible to define different event handlers based on the current state of the skill

Simple configuration to enable attribute persistence with DynamoDB

All speech output is automatically wrapped as SSML

Lambda event and context objects are fully available via this.event and this.contextAbility to override built-in functions giving you more flexibility on how you manage state or build responses. For example, saving state attributes to AWS S3.

Today, we’re excited to announce a new Alexa skills course available on Pluralsight, a global leader in online learning for technology professionals. The new course is focused on building custom Alexa skills in C# and ASP.NET Web API. In this four-module course, “Developing Alexa Skills for Amazon Echo”, Alexa developer and Pluralsight author Walter Quesada teaches the foundations of developing voice experiences for Amazon Echo and other Alexa-enabled devices. First, you'll learn the differences between Echo and Alexa, as well as the differences between the Alexa Voice Service and the Alexa Skills Kit. Next, you will quickly evaluate the 'Hello World' node.js sample code provided by Amazon. Finally, you will learn the certification process and requirements, publication stages, and how to create new versions of live skills. By the end of this course, you'll be better prepared to build and publish Alexa skills, or capabilities, for Alexa, the voice service that powers Echo.

“I’m excited for developers in the Pluralsight community to watch this first ever course on developing Alexa skills in C# and .NET. I can’t wait to see what you build. Let me know in the Pluralsight discussion forums.” – Walter Quesada, Pluralsight author

Watch Alexa Skills Kit Webinar by Alexa Evangelist, Dave Isbitski

If you need more information about Alexa before getting started, Dave Isbitski, Chief Evangelist for Alexa and Echo, has got you covered. In this exclusive webinar created for Pluralsight, Dave will walk you through the world of Alexa Skills Kit and how you can create your own voice-driven experience. The webinar starts by diving into the basics of Alexa, the SDKs, and resources to get started. Next, you’ll learn how to build an Alexa skill quickly by walking through code and interaction models.

Craig Johnson, president of Emerson’s Residential Solutions business, claims it was inevitable. “Thermostats are no longer just passive HVAC controllers hanging on your wall. The convergence of wireless and mobile technologies allowed us to develop a thermostat that allows better temperature control, programmability and scheduling, as well as remote access.”

Even before Amazon’s Smart Home Skill API was publicly released, Johnson was excited about smart home. Prior to Smart Home, Emerson had a fully functional mobile app and internet portal our customers could use to control their Sensi thermostat remotely. But integration of Alexa is a natural extension of that remote access and remote functionality.”

In February 2016, Johnson’s software development manager, Joe Mahari, jumped on board the Smart Home beta program. In just four weeks’ time—and by the time Amazon officially launched the Smart Home Skill API—Mahari’s team had built and tested its Sensi Smart Home skill and passed certification.

The Smart Home Skill API converts a voice command, such as “Alexa, increase my first floor by 2 degrees,” to directives (JSON messages). The directive includes:

the action (“increase”)

the device ID representing the thermostat named “first floor”)

any options (such as “2 degrees”), and

the device owner’s authentication information.

It then sends the directive to the methods implemented in the Sensi skill.

According to Mahari, Emerson implemented three main directives. Examples of these are:

The Emerson team agrees the skill and API were well packaged and supported, end-to-end. “Amazon defined the use case very crisply,” said Johnson. “We received a deck of scenarios to achieve, plus integrated logging, systems’ checks and documentation. These were essential to our success.”

Mahari says it was invaluable that the Amazon team connected with them daily. “For example, we had some concerns about how to increase or decrease the temperature during auto-schedules. But working directly with the Alexa team, we figured out how to make it work.”

So, if working with Amazon’s support and the API itself went so smoothly, what were some challenges the Emerson team faced over the four-week project?

Last month we released the first two videos in the Alexa video series created by developer education company Big Nerd Ranch. You can find parts 1 and 2 on the official YouTube Alexa Developers channel. Today we are excited to reveal the next two videos in the Big Nerd Ranch series on how to develop Alexa skills locally with Node.js.

In part 3 of 6, “Sessions and Voice User Interfaces”, we will learn about user sessions. This feature allows an Alexa skill to break more complicated data requirements into a series of steps spanning multiple requests to the skill service. We’ll also learn about Amazon’s voice user interface requirements. Following these requirements is important for getting a skill certified for public availability in the Alexa app. Lastly, we’ll introduce home cards. Cards are a graphical user interface element that can be sent from a skill to the Alexa app.

In part 4 of 6, “Persistence”, we will discuss how to link an Alexa skill with a database so that it can save an unfinished user interaction for later use in another session. Having the ability to persist data between Alexa sessions opens the door for far more versatile and sophisticated skills. We will see how to use Amazon DynamoDB to easily read and write data from an AWS Lambda function skill. We will use a library called Dynasty to interact with Amazon DynamoDB and handle asynchronous results more easily and elegantly.

Stay tuned for the last two videos from Big Nerd Ranch later this month.

A year ago we launched the Alexa Skills Kit to allow developers to build new voice capabilities, called skills, for Alexa. Since then, we’ve seen many Alexa developers start independent meetup groups in their local communities. The purpose of these groups are to network with other Alexa enthusiasts, share Alexa skill development knowledge, and build great voice user experiences.

We’ve curated a list of upcoming community-run Alexa meetups and local groups you can join. Thank you to the community leaders who volunteer their time to organize these local events and continue to contribute to the vibrant Alexa developer community.