<p><em>Amazon Developer Blogs</em></p>
<h1>Alexa Skill EWE smart living Controls the Smart Home</h1>
<p><em>Kristin Fritsche, 2018-08-14</em></p>
<p><img alt="ewe_blog_post.png" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/ewe_blog_post._CB470011891_.png?t=true" /></p>
<p>More and more of the devices we use at home every day can now be controlled intelligently. The Alexa skill from EWE helps customers control their smart home.</p>
<p>More and more of the devices we use at home every day can now be controlled intelligently. With smart-home-capable products, you can conveniently control lamps, electrical appliances, or radiator and room thermostats, for example.</p>
<p>The energy provider EWE has offered its customers smart home products under the name EWE smart living since 2015. The next step was to make operating those devices simpler. “Voice is the easiest way to control something, no matter where you are. That's how the idea for the <a href="https://www.amazon.de/EWE-Aktiengesellschaft-smart-living/dp/B077MNSR8S/ref=sr_1_1?s=digital-skills&amp;ie=UTF8&amp;qid=1527522935&amp;sr=1-1&amp;keywords=ewe+smart+living" target="_blank">EWE smart living</a> skill came about,” says Ann-Kathrin Weinert, product manager for smart living at EWE.</p>
<p>For the technical implementation, EWE brought in the developers of <a href="https://sovanta.com/" target="_blank">Sovanta AG</a> for support. Software developer Max Stark remembers his first Alexa skill very well: “We first familiarized ourselves with the Alexa technology and looked at what is possible with a smart home skill. Then, together with EWE, we worked out what we wanted to offer users.” For this, the team around Max Stark relied on the <a href="https://developer.amazon.com/de/docs/ask-overviews/build-skills-with-the-alexa-skills-kit.html#" target="_blank">Alexa documentation</a>.</p>
<h2>Smart Home API</h2>
<p>The team settled on two use cases: light and thermostat control. “Next, we worked our way into the <a href="https://developer.amazon.com/de/docs/smarthome/understand-the-smart-home-skill-api.html" target="_blank">Smart Home API</a>, which is part of the Alexa Skills Kit. We adapted the commands we use in our app to Alexa's requirements. The way smart home devices are managed differed from our app, and the command structure is prescribed by the API, so we adapted to it,” Max explains.</p>
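<p>As a rough illustration of what a Smart Home API handler looks like, the sketch below accepts a <code>TurnOn</code>/<code>TurnOff</code> directive and builds the response envelope the API prescribes. The <code>set_power</code> callback standing in for the call into the device backend is hypothetical, not part of EWE's actual implementation.</p>

```python
import uuid

def handle_directive(request, set_power):
    """Respond to an Alexa.PowerController directive.

    `set_power` is a hypothetical callback standing in for the call into
    the device backend (e.g. the provider's cloud); it is not part of the API.
    """
    directive = request["directive"]
    header = directive["header"]
    endpoint_id = directive["endpoint"]["endpointId"]

    if header["namespace"] == "Alexa.PowerController":
        # "TurnOn" / "TurnOff" collapse to a boolean for the backend.
        set_power(endpoint_id, header["name"] == "TurnOn")
        return {
            "event": {
                "header": {
                    "namespace": "Alexa",
                    "name": "Response",
                    "payloadVersion": "3",
                    "messageId": str(uuid.uuid4()),
                    "correlationToken": header.get("correlationToken"),
                },
                "endpoint": {"endpointId": endpoint_id},
                "payload": {},
            }
        }
    raise ValueError("unsupported directive: " + header["namespace"])
```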
<p>One essential feature is account linking, without which the skill would not work. For it, the developers had to switch their <a href="https://developer.amazon.com/de/docs/smarthome/steps-to-build-a-smart-home-skill.html#provide-account-linking-information" target="_blank">authentication system to OAuth2</a> so that Alexa can establish an authorized connection to the EWE system.</p>
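<p>The account-linking flow is standard OAuth 2.0: Alexa sends the user to the provider's login page, receives a one-time authorization code, and exchanges it for an access token at the provider's token endpoint. A minimal sketch of such a token endpoint follows; the in-memory stores and function names are illustrative, not EWE's actual system.</p>

```python
import secrets
import time

# In-memory stores standing in for the provider's database (sketch only).
AUTH_CODES = {}   # one-time code -> user id
TOKENS = {}       # access token  -> (user id, expiry timestamp)

def issue_auth_code(user_id):
    """Called after the user logs in on the account-linking login page."""
    code = secrets.token_urlsafe(16)
    AUTH_CODES[code] = user_id
    return code

def token_endpoint(grant_type, code):
    """Exchange a one-time authorization code for an access token, the
    step Alexa performs against the provider's token endpoint."""
    if grant_type != "authorization_code" or code not in AUTH_CODES:
        return {"error": "invalid_grant"}
    user_id = AUTH_CODES.pop(code)   # codes are single-use
    token = secrets.token_urlsafe(32)
    TOKENS[token] = (user_id, time.time() + 3600)
    return {"access_token": token, "token_type": "bearer", "expires_in": 3600}
```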
<h2>Handling the requests</h2>
<p>One challenge was that a request from Alexa must be answered within 8 seconds. That can be problematic, because the request has to travel from the Alexa cloud through the EWE cloud to the customer's home. There, the relevant smart home device has to be queried over radio (Z-Wave), and the response must then travel from the customer's network back to the EWE cloud and on to Alexa.</p>
<p>“To get all of this done within 8 seconds, we had to make several technical optimizations. To convert the Alexa request into a data format the EWE smart home devices understand, we would normally have to add several HTTP requests between the backend services. But such requests take too long,” Max reports, and continues: “An alternative is to add custom values (key-value pairs) to the so-called cookie field during Alexa device discovery. Alexa doesn't actually need this data to run the skill, but it can be retrieved within the skill, and that saves us the HTTP request.”</p>
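<p>A sketch of that idea: the discovery response stores backend routing details in the endpoint's <code>cookie</code>, and because Alexa echoes the cookie back with every subsequent directive for that endpoint, the handler can read the details without an extra HTTP request. The field names (<code>gatewayId</code>, <code>zwaveNodeId</code>) are hypothetical stand-ins, not EWE's actual keys.</p>

```python
def build_discovery_endpoint(device):
    """One endpoint entry for a Discover.Response. The `cookie` holds
    backend-specific key-value pairs (field names here are hypothetical);
    Alexa stores them opaquely and echoes them back with every directive
    for this endpoint."""
    return {
        "endpointId": device["id"],
        "friendlyName": device["name"],
        "manufacturerName": "EWE",
        "description": device["name"],
        "displayCategories": ["LIGHT"],
        "cookie": {
            "gatewayId": device["gateway"],
            "zwaveNodeId": str(device["node"]),
        },
        "capabilities": [],
    }

def backend_address(directive):
    """A later directive handler reads the echoed cookie directly,
    instead of looking the device up with another HTTP request."""
    cookie = directive["endpoint"]["cookie"]
    return cookie["gatewayId"], int(cookie["zwaveNodeId"])
```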
<h2>Testing in the developer console</h2>
<p>Even during development, Max and his team continuously <a href="https://developer.amazon.com/de/docs/devconsole/test-your-skill.html" target="_blank">tested</a> the skill in the Alexa developer console: “It is incredibly motivating that a skill already works after just a few days of work, even if everything is implemented very rudimentarily at first,” says Max.</p>
<h2>Working with user feedback</h2>
<p>The developer console also shows how users interact with the skill and which utterances are used frequently. “We get a lot of useful feedback from our users. One thing I remember particularly well: users complained that they couldn't name the smart home devices in the skill themselves. They had to use predefined names, which are often quite different from what seems sensible or natural to the user. So we built that feature, and devices can now be named however the user likes,” says Max.</p>
<h2>Tips from one developer to another</h2>
<p>Max also has a few tips for fellow developers: “The documentation is very thorough and should definitely be used. It also makes sense to exchange ideas with other skill developers, for example in the <a href="https://forums.developer.amazon.com/spaces/23/index.html" target="_blank">Alexa forum</a>. Testing is very important, ideally with external users from your target group. You can learn a lot from their feedback and improve the skill even further,” Max advises.</p>
<p>Alexa is constantly evolving, and Ann-Kathrin and Max already have plans for the EWE smart living skill. The next step will be derived from user feedback and from how Alexa itself develops; for example, triggering scenes by voice could become a new feature.</p>
<h2>Resources</h2>
<ul>
<li><a href="https://www.amazon.de/EWE-Aktiengesellschaft-smart-living/dp/B077MNSR8S/ref=sr_1_1?s=digital-skills&amp;ie=UTF8&amp;qid=1527522935&amp;sr=1-1&amp;keywords=ewe+smart+living" target="_blank">Alexa Skill EWE smart living</a></li>
<li><a href="https://developer.amazon.com/de/docs/ask-overviews/build-skills-with-the-alexa-skills-kit.html#" target="_blank">Alexa developer documentation</a></li>
</ul>
<h2>Build a skill, get a developer goodie</h2>
<p>Bring your Alexa skill idea to life and take part in our <a href="https://developer.amazon.com/de/alexa-skills-kit/alexa-developer-skill-promotion?&amp;sc_category=Owned&amp;sc_channel=WB&amp;sc_campaign=BlogDevStory&amp;sc_publisher=BL&amp;sc_content=Content&amp;sc_funnel=Visit&amp;sc_country=DE&amp;sc_medium=Owned_WB_BlogDevStory_BL_Content_Visit_DE_DEDevs&amp;sc_segment=DEDevs">developer promotion</a>. All developers residing in Germany, Austria, or Luxembourg who build a German-language Alexa skill between August 1 and August 31, 2018, publish it in the skill store, and meet the <a href="https://developer.amazon.com/de/alexa-skills-kit/alexa-developer-skill-promotion?&amp;sc_category=Owned&amp;sc_channel=WB&amp;sc_campaign=BlogDevStory&amp;sc_publisher=BL&amp;sc_content=Content&amp;sc_funnel=Visit&amp;sc_country=DE&amp;sc_medium=Owned_WB_BlogDevStory_BL_Content_Visit_DE_DEDevs&amp;sc_segment=DEDevs">terms and conditions</a> will receive an Alexa developer shirt. If your skill reaches more than 100 unique users within the first 30 days after publication, you will also receive a 50-euro Amazon.de online store gift certificate. One developer additionally has the chance to win an Echo Spot. As soon as your skill is published, you can start spreading the word. <a href="https://developer.amazon.com/de/alexa-skills-kit/alexa-developer-skill-promotion?&amp;sc_category=Owned&amp;sc_channel=WB&amp;sc_campaign=BlogDevStory&amp;sc_publisher=BL&amp;sc_content=Content&amp;sc_funnel=Visit&amp;sc_country=DE&amp;sc_medium=Owned_WB_BlogDevStory_BL_Content_Visit_DE_DEDevs&amp;sc_segment=DEDevs">Get started now and build your skill!</a></p>
<hr />
<h1>Shrinking Machine Learning Models for Offline Use</h1>
<p><em>Larry Hardesty, 2018-08-13</em></p>
<p>How a new algorithm for &quot;perfect hashing&quot; enables Amazon scientists to shrink the memory footprint of machine learning models by 94%.</p>
<p>Last week, the Alexa Auto team <a href="https://developer.amazon.com/blogs/alexa/post/b62cbff0-9674-4166-b476-9ad4cf74e9bf/announcing-the-alexa-auto-software-development-kit-sdk" target="_blank">announced</a> the release of its new Alexa Auto Software Development Kit (SDK), enabling developers to bring Alexa functionality to in-vehicle infotainment systems.</p>
<p><img alt="In-car_Alexa.jpg" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/In-car_Alexa._CB471851985_.jpg?t=true" style="float:left; height:334px; margin:0px 10px; width:500px" />The initial release of the SDK assumes that automotive systems will have access to the cloud, where the machine-learning models that power Alexa currently reside. But in the future, we would like Alexa-enabled vehicles — and other mobile devices — to have recourse to some core functions even when they’re offline. That will mean drastically reducing the size of the underlying machine-learning models, so they can fit in local memory.</p>
<p>At the same time, third-party developers have created more than 45,000 Alexa skills, which expand on Alexa’s native capabilities, and that number is increasing daily. Even in the cloud, third-party skills are loaded into memory only when explicitly invoked by a customer request. Shrinking the underlying models would reduce load time, ensuring that&nbsp;Alexa customers continue to experience millisecond response times.</p>
<p>At this year’s Interspeech, my colleagues and I will present a <a href="https://arxiv.org/pdf/1807.07520.pdf" target="_blank">new technique</a> for compressing machine-learning models that reduces their memory footprints by 94% while leaving their performance almost unchanged. We report our results in a paper titled “Statistical Model Compression for Small-Footprint Natural Language Understanding.”</p>
<p>Alexa’s natural-language-understanding systems, which interpret free-form utterances, use several different types of machine-learning (ML) models, but they all share some common traits. One is that they learn to extract “features” — or strings of text with particular predictive value — from input utterances. An ML model trained to handle music requests, for instance, will probably become sensitized to text strings like “the Beatles”, “Elton John”, “Whitney Houston”, “Adele”, and so on. Alexa’s ML models frequently have millions of features.</p>
<p>Another common trait is that each feature has a set of associated “weights,” which determine how large a role it should play in different types of computation. The need to store multiple weights for millions of features is what makes ML models so memory intensive.</p>
<p>Our first technique for compressing an ML model&nbsp;is to <em>quantize</em> its weights. We take the total range of weights — say, -100 to 100 — and divide it into even intervals — say, -100 to -90, -90 to -80, and so on. Then we simply round each weight off to the nearest boundary value for its interval. In practice, we use 256 intervals, which allows us to represent every weight in the model with a single byte of data, with minimal effect on the model’s accuracy. This approach has the added benefit of automatically rounding low weights to zero, so they can be discarded.</p>
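<p>A minimal sketch of this uniform quantization in pure Python (illustrative only, not the production implementation): each weight becomes a one-byte index into a shared table of 256 evenly spaced boundary values.</p>

```python
def quantize(weights, n_levels=256):
    """Map each float weight to a one-byte index into a shared table of
    n_levels evenly spaced boundary values spanning the weight range."""
    lo, hi = min(weights), max(weights)
    step = (hi - lo) / (n_levels - 1)
    indices = [round((w - lo) / step) for w in weights]   # each fits in a byte
    levels = [lo + i * step for i in range(n_levels)]
    return indices, levels

def dequantize(indices, levels):
    """Recover approximate weights from the byte indices."""
    return [levels[i] for i in indices]
```

<p>The reconstruction error per weight is at most half the interval width, which for a range of -100 to 100 and 256 levels is about 0.39.</p>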
<p>Our other compression technique is more elegant. If an Alexa customer says, “Alexa, play ‘Yesterday,’ by the Beatles,” we want our system to pull up the weights associated with the feature “the Beatles” — not the weights associated with “Adele”, “Elton John”, and the rest. This requires a means of mapping particular features to the memory locations of the corresponding weights.</p>
<p>The standard way to perform such mappings is through <em>hashing</em>. A hash function is a mathematical function that takes arbitrary inputs and scrambles them up — hashes them — in such a way that the outputs (1) are of fixed size and (2) bear no predictable relationship to the inputs. If the output size is fixed at 16 bits, for instance, there are 65,536 possible hash values, but “Hank Williams” might map to value 1, while “Hank Williams, Jr.” maps to value 65,000.</p>
<p>Nonetheless, traditional hash functions sometimes produce <em>collisions</em>: Hank Williams, Jr. may not map to the same location as Hank Williams, but something totally arbitrary — the Bay City Rollers, say — might. In terms of runtime performance, this usually isn’t a big problem. If you hash the name “Hank Williams” and find two different sets of weights at the corresponding memory location, it doesn’t take that long to consult a metadata tag to determine which set of weights belongs to which artist.</p>
<p>In terms of memory footprint, however, this approach to collision resolution makes a substantial difference. With quantization, the weights themselves require just a few bytes of data; the metadata used to distinguish sets of weights could end up requiring more space in memory than the data it’s tagging.</p>
<p>We address this problem by using a more advanced hashing technique called <em>perfect hashing</em>, which maps a specific number of data items to the same number of memory slots but guarantees there will be no collisions. With perfect hashing, the system can simply hash a string of characters and pull up the corresponding weights — no metadata required.</p>
<p><img alt="Perfect_hash_cropped.jpg" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/Perfect_hash_cropped._CB470011001_.jpg?t=true" style="display:block; height:257px; margin-left:auto; margin-right:auto; width:500px" /></p>
<p style="text-align:center"><em><sub>Our perfect-hashing algorithm relies on a family of conventional hash functions (h1, h2, etc.). If a function in the family produces a collision-free hash, we toggle the corresponding 0 in an array to 1. Then we repeat the process with different functions and smaller arrays, until every input value has a unique hash.</sub></em><br /> &nbsp;</p>
<p>To produce a perfect hash, we assume that we have access to a family of conventional hash functions all of which produce random hashes. That is, each function in the family might hash “Hank Williams” to a different value, but that value tells you nothing about how the same function will hash any other string. In practice, we use the hash function MurmurHash, which can be seeded with a succession of different values.</p>
<p>Suppose that you have <em>N</em> input strings that you want to hash. We begin with an array of <em>N</em> 0’s. Then we apply our first hash function — call it Hash1 — to all <em>N</em> inputs. For every string that yields a unique hash value — no collisions — we change the corresponding 0 in the array to a 1.</p>
<p>Then we build a new array of 0’s, with entries for only the input strings that yielded collisions under Hash1. To those strings, we now apply a different hash function — say, Hash2 — and we again toggle the 0’s corresponding to collision-free hashes.</p>
<p>We repeat this process until every input string has a corresponding 1 in some array. Then we combine all the arrays into one giant array. The position of a 1 in the giant array indicates the unique memory location assigned to the corresponding input string.</p>
<p>Now, when the trained network receives an input, it applies Hash1 to each of the input’s substrings and, if it finds a 1 in the first array, it goes to the associated address. If it finds a 0, it applies Hash2 and repeats the process.</p>
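<p>The construction and lookup described above can be sketched as follows. The paper uses MurmurHash for the hash family; this illustration derives the seeded family from BLAKE2 to stay dependency-free, and takes a key's slot to be the number of 1's preceding its bit across the concatenated arrays, which yields a compact, collision-free index.</p>

```python
import hashlib

def _h(seed, key, size):
    # One member of the hash family: BLAKE2b salted by `seed` (the paper
    # seeds MurmurHash instead; any family of independent hashes works).
    digest = hashlib.blake2b(key.encode(), salt=seed.to_bytes(8, "big")).digest()
    return int.from_bytes(digest[:8], "big") % size

def build_perfect_hash(keys):
    """Build the cascade of 0/1 arrays described above."""
    arrays, remaining, seed = [], list(keys), 0
    while remaining:
        size = len(remaining)
        buckets = {}
        for key in remaining:
            buckets.setdefault(_h(seed, key, size), []).append(key)
        # A 1 marks a position that exactly one key hashed to.
        bits = [1 if len(buckets.get(i, [])) == 1 else 0 for i in range(size)]
        arrays.append(bits)
        # Keys that collided move on to the next, smaller round.
        remaining = [k for b in buckets.values() if len(b) > 1 for k in b]
        seed += 1
    return arrays

def lookup(arrays, key):
    """Map `key` to its unique slot: the rank of its 1 in the giant array."""
    offset = 0
    for seed, bits in enumerate(arrays):
        pos = _h(seed, key, len(bits))
        if bits[pos]:
            return offset + sum(bits[:pos])
        offset += sum(bits)
    raise KeyError(key)
```

<p>Because a key that collided in an early round leaves a 0 at its hashed position there, the lookup falls through to later rounds exactly as the construction did, so every key lands on its own slot.</p>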
<p>Calling successive hash functions for some inputs does incur a slight performance penalty. But it’s a penalty that’s paid only where a conventional hash function would yield a collision, anyway. In our paper, we include both a theoretical analysis and experimental results that demonstrate that this penalty is almost negligible. And it’s certainly a small price to pay for the drastic reduction in memory footprint that the method affords.</p>
<p><em>Grant Strimel is an applied scientist in the Alexa Speech group. He and colleagues will present a paper describing their work next month at Interspeech.</em></p>
<p><strong>Acknowledgments:</strong> Kanthashree Mysore Sathyendra, Stanislav Peshterliev</p>
<p><strong><a href="https://arxiv.org/pdf/1807.07520.pdf">Paper</a>:</strong> “Statistical Model Compression for Small-Footprint Natural Language Understanding”</p>
<p><strong>Related:</strong></p>
<p><a href="https://www.amazon.jobs/en/landing_pages/interspeech2018">Amazon at Interspeech</a><br /> <a href="https://blog.aboutamazon.com/amazon-ai/machine-learning-prowess-on-display">Amazon at ICML</a></p>
<hr />
<h1>Things Every Alexa Skill Should Do: Provide Contextual Help to Guide Customers</h1>
<p><em>Jennifer King, 2018-08-13</em></p>
<p><img alt="" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/blog(10)._CB498456606_.png" style="height:240px; width:954px" /></p>
<p>Help is often overlooked in a skill. But when done well, it is an invaluable part of the customer experience. Learn why it's important for every skill to include contextual help to guide customers throughout the voice experience.</p>
<p><em>Editor's Note: This is an installment of our new series called </em><em><a href="https://developer.amazon.com/blogs/alexa/tag/10+Things">Things Every Alexa Skill Should Do</a></em><em>, which highlights the important features and lessons that every skill builder can use to make their skills more engaging for customers. Follow the series to learn, get inspired, and build engaging Alexa skills.</em></p>
<p>Help is often overlooked in a skill. But when done well, it is an invaluable part of the customer experience. As the skill’s creator, you often have full knowledge of what will and won’t work in your skill. But customers don’t have the same deep knowledge. They are going to ask for help from time to time. The better your help experience is, the more likely your customer will find what they are looking for.</p>
<p>Most help responses are static speech that gives the customer a couple of ideas to try. Great help responses consider what the customer is currently doing and what they’ve already tried, and then give them contextual recommendations on how to continue.</p>
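<p>One simple way to implement this is to key help messages off a state value the skill already tracks in its session attributes. The state names and messages below are hypothetical, just to show the pattern:</p>

```python
# Hypothetical states and messages; a real skill would read the state
# from its session attributes and phrase help in its own voice.
HELP_BY_STATE = {
    "new": "Welcome! You can start a round by saying 'start'.",
    "playing": "You're mid-round. Say 'repeat' to hear the question again, "
               "or 'score' to check your score.",
    "finished": "The round is over. Say 'play again' or 'exit'.",
}

def help_response(session_attributes):
    """Pick the help message that matches where the customer currently is."""
    state = session_attributes.get("state", "new")
    return HELP_BY_STATE.get(state, HELP_BY_STATE["new"])
```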
<p>Tracking your customer’s actions and responding in a way that is specific to their current state will go a long way in helping them accomplish their tasks, making your skill a reliable tool in their Alexa skill library. According to Gal Shenar, founder of <a href="https://developer.amazon.com/blogs/alexa/post/1c5f9c52-9222-4669-81be-c091c9c9c151/gal-shenar-has-cracked-the-code-when-it-comes-to-earning-alexa-developer-rewards">Stoked Skills</a> and creator of more than 30 published Alexa skills, <a href="https://developer.amazon.com/blogs/alexa/post/2cda040c-b432-493c-92b2-842cf4c7aab6/hear-it-from-a-skill-builder-4-ways-to-optimize-your-skills-for-customer-engagement">an effective way to improve the overall experience</a> is to track how each customer is doing and customize responses and reprompts to cater to their needs.</p>
<p>“Your goal is to make sure that everyone can understand how to interact with your skill the way you intend, without boring your top customers that don’t need additional help,” says Shenar.</p>
<p>For more tips on how you can use contextual help to enable customers to get the most value from your skill, check out the following resources:</p>
<ul>
<li><a href="https://developer.amazon.com/designing-for-voice/what-alexa-says/#provide-contextual-help" target="_blank">Voice Design Guide: Provide Contextual Help</a></li>
<li><a href="https://developer.amazon.com/blogs/alexa/post/b0b1cfc0-0792-4e98-aed0-25ea77f33830/tips-for-adding-contextual-help-to-your-alexa-skill">How to Add Contextual Help to Your Alexa Skill</a></li>
<li><a href="https://developer.amazon.com/blogs/alexa/post/2cda040c-b432-493c-92b2-842cf4c7aab6/hear-it-from-a-skill-builder-4-ways-to-optimize-your-skills-for-customer-engagement">Hear It from a Skill Builder: 4 Ways to Uplevel Your Skills for Customer Engagement</a></li>
</ul>
<h2>Get the Guide: 10 Things Every Alexa Skill Should Do</h2>
<p>With more than 40,000 skills in the Alexa Skills Store, we’ve learned a lot about what makes a skill great and what you can do to create incredible voice experiences for your customers. Download the complete guide about <a href="https://build.amazonalexadev.com/10_things_every_skill_should_do_v2.html?&amp;sc_category=Owned&amp;sc_channel=WB&amp;sc_campaign=wb_acquisition&amp;sc_publisher=ASK&amp;sc_content=Content&amp;sc_detail=Guide&amp;sc_funnel=Convert&amp;sc_country=WW&amp;sc_medium=Owned_WB_wb_acquisition_ASK_Content_Guide_Convert_WW_visitors_build_BlogChallenge&amp;sc_segment=visitors&amp;sc_place=build&amp;sc_trackingcode=Blog10ThingsSeries" target="_blank">10 Things Every Alexa Skill Should Do</a> for more tips, code samples, and best practices to build engaging skills.</p>
<h2>Build Skills, Earn Developer Perks</h2>
<p>Bring your big idea to life with Alexa and earn perks through our <a href="https://developer.amazon.com/alexa-skills-kit/alexa-developer-skill-promotion">milestone-based developer promotion</a>. US developers, publish your first Alexa skill and earn a custom Alexa developer t-shirt. Publish a skill for Alexa-enabled devices with screens and earn an Echo Spot. Publish a skill using the Gadgets Skill API and earn a 2-pack of Echo Buttons. If you're not in the US, check out our promotions in <a href="https://developer.amazon.com/alexa-skills-kit/alexa-developer-skill-promotion-canada">Canada</a>, the <a href="http://developer.amazon.com/en-gb/alexa-skills-kit/alexa-developer-skill-promotion" target="_blank">UK</a>, <a href="http://developer.amazon.com/de/alexa-skills-kit/alexa-developer-skill-promotion" target="_blank">Germany</a>, <a href="https://developer.amazon.com/ja/alexa-skills-kit/alexa-developer-skill-promotion">Japan</a>, <a href="https://developer.amazon.com/fr/alexa-skills-kit/alexa-developer-skills-promotion">France</a>, <a href="https://developer.amazon.com/alexa-skills-kit/anz/alexa-developer-skill-promotion">Australia</a>, and <a href="http://developer.amazon.com/alexa-skills-kit/alexa-developer-skill-promotion-india" target="_blank">India</a>. <a href="https://developer.amazon.com/alexa-skills-kit/alexa-developer-skill-promotion">Learn more</a> about our promotion and start building today.</p>
<hr />
<h1>Announcing the Alexa Auto Software Development Kit (SDK)</h1>
<p><em>Adam Foster, 2018-08-09</em></p>
<p><img alt="alexa-auto-sdk-blog(1).png" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/alexa-auto-sdk-blog(1)._CB473921275_.png?t=true" /></p>
<p>Today, we are announcing the Alexa Auto SDK to help simplify the integration of Alexa into in-vehicle infotainment systems, so that our customers can take Alexa on the road.</p>
<p style="margin-left:0in; margin-right:0in">At Amazon, we believe voice is the best way to interact with devices in nearly every setting. Our vision has always been for Alexa to be at your side, ready to help when you need it. We’ve already seen Alexa make customers’ lives more convenient and more productive at home and at work, and we’re excited to expand those capabilities so customers never have to walk out the door without her. Today, we are announcing the Alexa Auto SDK to help simplify the integration of Alexa into in-vehicle infotainment systems, so that our customers can take Alexa on the road.</p>
<p style="margin-left:0in; margin-right:0in"><a href="https://developer.amazon.com/alexa-voice-service/alexa-auto-sdk" target="_blank">Get the Alexa Auto SDK &raquo;</a></p>
<h2>What does the Alexa Auto SDK include?</h2>
<p>The Alexa Auto SDK simplifies the integration of Alexa into in-vehicle infotainment systems. The SDK brings the Alexa experience that has delighted customers at home into the vehicle. It adds automotive-specific functionality and contextualizes the experience for the vehicle. It includes source code and function libraries in C++ and Java that enable your vehicle to process audio inputs and triggers, establish a connection with Alexa, and handle all Alexa interactions. It also includes sample applications, build scripts, sequence diagrams and documentation – supporting both Android and QNX operating systems on ARM and x86 processor architectures.</p>
<h2>Alexa Auto SDK capabilities</h2>
<p>The Alexa Auto SDK includes core Alexa functionality, such as speech recognition and synthesis, and other capabilities such as streaming media, controlling smart home devices, notifications, weather reports, and tens of thousands of custom skills. Additionally, the SDK provides the hooks required to connect to a wake word engine, local media player, local phone, and local navigation system.</p>
<h3>Calling</h3>
<p>Enable customers to specify a contact name or phone number and Alexa will instruct the native calling service in the vehicle to place the call.</p>
<h3>Media streaming</h3>
<p>Enable customers to stream audio from popular media services, such as Amazon Music, Audible, and iHeartRadio, and display album art and other info to the head unit.</p>
<h3>Navigation</h3>
<p>Enable customers to set the destination of the native turn-by-turn navigation system by specifying an address or point-of-interest and cancel navigation when the user does not need it anymore.</p>
<h3>Local search</h3>
<p>Enable customers to search for restaurants, movie theaters, grocery stores, hotels, and other businesses, and navigate to the location.</p>
<h2>Start Developing Today</h2>
<ul>
<li>Learn more about the functionality by reading the <a href="https://github.com/alexa/aac-sdk/" target="_blank">Alexa Auto SDK documentation</a></li>
<li>Get the Alexa Auto SDK from <a href="https://github.com/alexa/aac-sdk/tree/master/builder" target="_blank">Github</a>&nbsp;to start integrating Alexa into vehicles</li>
</ul>
<h2>What is Alexa?</h2>
<p>Alexa is a cloud-based service that powers devices like Amazon Echo, Echo Show, Echo Plus, Echo Spot, Echo Dot, and more. The Alexa service is always getting smarter, both in its features and in its natural language understanding and accuracy. Because Alexa’s brains are in the AWS cloud, she continually learns and adds more functionality, every hour, every day.</p>
<hr />
<h1>The Alexa Auto Software Development Kit (SDK) Now Available</h1>
<p><em>Ted Karczewski, 2018-08-09</em></p>
<p><img alt="alexa-auto-sdk-blog(1).png" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/alexa-auto-sdk-blog(1)._CB473921275_.png?t=true" /></p>
<p>Today, we announced the general availability of the Alexa Auto SDK, which helps simplify integrating Alexa into in-vehicle infotainment systems so that customers can use Alexa in the car as well.</p>
<p>At Amazon, we believe the ideal is to be able to control all kinds of devices through Amazon Alexa, our cloud-based voice service. Our vision for Alexa is that she is always at the customer's side, ready to help whenever needed. Alexa already makes many everyday and business situations more convenient and productive, and we are extending her to a broader range of settings so customers can use Alexa outside the home as well. Today, we announced the general availability of the Alexa Auto SDK, which helps simplify integrating Alexa into in-vehicle infotainment systems. With it, customers can use Alexa in the car, too.</p>
<p><a href="https://developer.amazon.com/alexa-voice-service/alexa-auto-sdk">Get the Alexa Auto SDK &raquo;</a></p>
<h2><strong>What the Alexa Auto SDK includes</strong></h2>
<p>The Alexa Auto SDK is designed to make it easy to integrate Alexa into in-vehicle infotainment systems. Through this SDK, the Alexa experience customers already enjoy at home can be brought into the vehicle, with added automotive-specific functionality that optimizes the in-car experience. The SDK includes the source code and function libraries (C++ and Java) needed to use the Alexa cloud service from the vehicle, covering audio processing and the connection to the Alexa service in the cloud. It also includes sample applications, build scripts, sequence diagrams, and documentation, and supports both the Android and QNX operating systems on ARM and x86 processor architectures.</p>
<h2><strong>Alexa Auto SDK capabilities</strong></h2>
<p>The Alexa Auto SDK includes the functionality needed to use Alexa's core capabilities of speech recognition and speech synthesis, along with media streaming, smart home device control*<sup>1</sup>, voice and video calling, messaging, the Drop In feature*<sup>2</sup>, weather reports, and custom skills (more than 40,000 worldwide and more than 1,000 in Japan as of July 2018). In addition, the SDK provides the hooks required to connect to a wake word engine and to the in-vehicle media player, phone, and navigation system.</p>
<p><sup>*1&nbsp; Smart home control requires purchasing compatible products. Depending on the product, a separately sold hub may also be needed for the connection.</sup></p>
<p><sup>*2&nbsp; Voice and video calling, messaging, and Drop In are not currently available in Japan, but support is planned.</sup></p>
<h3>Calling</h3>
<p>Using the Alexa voice service, customers will be able to call a specified contact or phone number from a mobile phone connected to the vehicle, and to talk to other Echo devices through Alexa's Drop In feature<sup>*3</sup>.</p>
<p><sup>*3&nbsp; This feature is not currently available on Echo devices in Japan; support is planned.</sup></p>
<h3>Media streaming</h3>
<p>Customers will be able to play music from Amazon Music and other popular music streaming services*<sup>4</sup> and have Kindle books*<sup>5</sup> read aloud*<sup>6</sup>.</p>
<p><sup>*4&nbsp; Each service may require separate registration, a contract, or fees.</sup></p>
<p><sup>*5&nbsp; Picture books, photo collections, manga, graphic novels, adult titles, and similar content are excluded.</sup></p>
<p><sup>*6&nbsp; Books that support read-aloud can be checked in the Alexa app or on the product detail page in the Kindle store.</sup></p>
<h3>Navigation</h3>
<p>Customers will be able to use the Alexa voice service to set a destination in the navigation system by address or point of interest, and to cancel navigation that is in progress<sup>*7</sup>.</p>
<p><sup>*7&nbsp; This feature is not currently available in Japan.</sup></p>
<h3>Local search</h3>
<p>Using the Alexa voice service, customers will be able to search for information on nearby shops and convenience stores or movie showtimes, and set the navigation system to a specific location<sup>*8</sup>.</p>
<p><sup>*8&nbsp; This feature is not currently available in Japan.</sup></p>
<h2>Start developing today</h2>
<ul>
<li>Learn more about the functionality in the <a href="https://github.com/alexa/aac-sdk/">SDK documentation</a>.</li>
<li>Download the Alexa Auto SDK from <a href="https://github.com/alexa/aac-sdk/tree/master/builder">GitHub</a> and bring Alexa to the vehicle.</li>
</ul>
<h2>What is Alexa?</h2>
<p>Alexa is a cloud-based voice service, delivered through Alexa-enabled devices such as Amazon Echo and Echo Spot. Alexa is constantly improving her features and the accuracy of her speech recognition and natural language understanding; built on the AWS cloud, she learns continuously and evolves every day.</p>
<hr />
<h1>Alexa Skills Kit Expands to Mexico; Alexa Voice Service Coming Later This Year</h1>
<p><em>Jennifer King, 2018-08-09</em></p>
<p><a href="http://developer.amazon.com/es-mx/alexa-skills-kit/"><img alt="ASK_MX-launch_blog.png" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/AlexaSkillsKit/ASK_MX-launch_blog._CB471578423_.png?t=true" style="display:block; margin:10px auto" /></a>Today, we’re excited to announce that developers can start building voice experiences for customers in <a href="https://developer.amazon.com/es-mx/alexa">Mexico</a> using the <a href="https://developer.amazon.com/es-mx/alexa-skills-kit/">Alexa Skills Kit</a>. Skills that developers create now and are certified for publication will be available for customers when Alexa launches in Mexico later this year. Commercial hardware manufacturers who want to develop Alexa-enabled products for Mexican customers can request early access to the <a href="https://developer.amazon.com/alexa-voice-service/international/">invite-only</a> Alexa Voice Service developer preview. 
Along with the Echo family of devices, Sonos and Bose will bring Alexa-enabled products to Mexico.</p>
<hr />
<p>We are pleased to announce that, starting today, you can offer voice experiences to customers in <a href="https://developer.amazon.com/es-mx/alexa">Mexico</a> using the <a href="https://developer.amazon.com/es-mx/alexa-skills-kit/">Alexa Skills Kit</a>. Skills that developers create now and that are certified for publication will be available to customers when Alexa launches in Mexico later this year. Commercial hardware manufacturers who want to develop Alexa-enabled products for Mexican customers can request <a href="https://developer.amazon.com/alexa-voice-service/international/">early access to the Alexa Voice Service developer preview program</a>. Sonos and Bose will also bring Alexa-enabled devices, alongside the Echo family of devices.</p>
<h2>Build New Alexa Skills with the Alexa Skills Kit</h2>
<p>The ASK is a collection of APIs and tools that make it fast and easy to create new voice-driven applications, also called “skills,” for Alexa. Developers need no experience in speech recognition or natural language understanding to build a skill; Alexa takes care of listening to, understanding, and processing the customer’s request, so developers only have to focus on the skill’s design.</p>
<h2>How to Design Alexa Skills for Customers Around the World</h2>
<p>Getting started building voice experiences for Alexa is easy. Explore our simple <a href="https://developer.amazon.com/alexa-skills-kit/tutorials">tutorials</a> or take one of the on-demand <a href="https://developer.amazon.com/alexa-skills-kit/webinars">webinars</a> to learn how to develop a skill quickly. If you want to create a multi-language Alexa skill, read our <a href="https://developer.amazon.com/docs/custom-skills/develop-skills-in-multiple-languages.html">technical documentation</a>, which explains how to design a skill across all available language models, including English for the US, India, the UK, Canada, and Australia, as well as German, Japanese, French, Italian, Spanish for Spain, and now Spanish for Mexico.</p>
<h2>Update Your Skill to Reach New Customers</h2>
<p>If you are an Alexa developer and want to reach new customers in Mexico, you can enhance a skill you have already built by extending it to support the new Spanish language model for Mexico, following <a href="https://developer.amazon.com/blogs/alexa/post/5364c3a4-100f-44cd-ae95-e63d1b2b0ada/how-to-update-your-alexa-skills-for-mexico">these simple steps</a>.</p>
<h2>Integrate Alexa into Your Devices with the Alexa Voice Service</h2>
<p>The <a href="https://developer.amazon.com/alexa-voice-service">AVS</a> lets developers integrate Alexa directly into their products, bringing the convenience of voice control and cloud-based intelligence to any connected device. The AVS provides a set of resources, including APIs, hardware development kits, software development kits, and documentation. Later this year, device makers will be able to use these resources to launch Alexa-enabled products in Mexico, thanks to access to the Spanish language model for Mexico and to Alexa skills. Commercial device makers can request early access to our Alexa Voice Service developer preview program, available <a href="https://developer.amazon.com/alexa-voice-service/international/">by invitation only</a>.</p>
<h2>Attend Our Webinars and Workshops to Start Building Alexa Skills</h2>
<p>Need help getting started? Join one of our upcoming webinars or events. An Alexa evangelist or solutions architect will answer any questions you may have. We can’t wait to see what you are building.</p>
<ul>
<li>[Webinar] <strong>Differences Between Building for Voice and Building for a Screen</strong> – <a href="https://build.amazonalexadev.com/diferencias-entre-programar-para-voz-webinar-registration-mx.html" target="_blank">August 22 – 14:00</a></li>
<li>[Webinar] <strong>Build Your First Voice Experience in Spanish with Amazon Alexa</strong> – <a href="https://build.amazonalexadev.com/webinar-mx-desarrolla.html" target="_blank">August 29 – 14:00</a></li>
<li>[Event] <strong>Workshop (Mexico City)</strong> – <a href="https://mxalexaskillsworkshop1.splashthat.com/" target="_blank">September 4</a></li>
<li>[Event] <strong>Workshop (Guadalajara)</strong> – <a href="https://mxalexaskillsworkshop2.splashthat.com/" target="_blank">September 5</a></li>
<li>[Event] <strong>Workshop (Monterrey)</strong> – <a href="https://mxamazonalexaskillsworkshop3.splashthat.com/" target="_blank">September 6</a></li>
<li>[Event] <strong>Alexa Dev Days (Mexico City)</strong> – <a href="http://alexadevday.com/MexicoCity/LaunchBlog" target="_blank">November 7-8</a></li>
</ul>
<h2>Join the Alexa Developer Preview Program</h2>
<p>Want to build an Alexa skill? You have the opportunity to join the Alexa trial program in Mexico. Developers in Mexico whose Spanish (MX) skill is certified for publication and who submit the form available <a href="http://developer.amazon.com/es-mx/alexa-skills-kit/alexa-developer-preview-program">here</a> before September 30 may be eligible to join our Alexa trial program in Mexico and receive an Echo device.</p>
<p><a href="http://developer.amazon.com/es-mx/alexa-skills-kit/alexa-developer-preview-program">M&aacute;s informaci&oacute;n &raquo;</a></p>/blogs/alexa/post/5364c3a4-100f-44cd-ae95-e63d1b2b0ada/how-to-update-your-alexa-skills-for-mexicoHow to Update Your Alexa Skills for MexicoMemo Doring2018-08-09T02:00:00+00:002018-08-09T02:39:15+00:00<p>Today, we announced that Amazon Alexa and Alexa-enabled devices are coming to Mexico later this year. Starting today, you can use the <a href="http://developer.amazon.com/es/alexa-skills-kit/">Alexa Skills Kit (ASK)</a> to build skills for customers in Mexico using the new Spanish (MX) language model.</p><p>Today, we announced that Amazon Alexa and Alexa-enabled devices are coming to Mexico later this year. Starting today, you can use the <a href="http://developer.amazon.com/es/alexa-skills-kit/">Alexa Skills Kit (ASK)</a> to build skills for customers in Mexico using the new Spanish (MX) language model.</p>
<p>If you are new to skill development, check out this <a href="https://github.com/alexa/skill-sample-nodejs-fact/" target="_blank">detailed walkthrough</a> to get started. If you’re an experienced Alexa developer, you can enhance your existing skill by extending it to support the new Spanish (MX) language model. This tutorial shows you how to add support for the Spanish (MX) model to your existing skills, and how to use ASK to make Alexa respond based on locale.</p>
<p>You will learn:</p>
<ol>
<li>How to update an Alexa skill for Mexican customers using the new Spanish (MX) language model</li>
<li>How to update your AWS Lambda function so your skill delivers the right content to your customers in each of the supported regions—all from a single code base</li>
</ol>
<h2>Part 1: Add the New Language Model for Your Skill</h2>
<p>1. Navigate to your existing skill on the <a href="https://developer.amazon.com/edw/home.html#/skill/create/">Amazon Developer Portal</a>.</p>
<p>2. Click on the language drop down on the top right of the screen and select the last option: “Language Settings.”</p>
<p><a href="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/MX_Launch-TechBlog-LanguageSettings._CB1533576073_.png"><img alt="" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/MX_Launch-TechBlog-LanguageSettings._CB1533576073_.png" style="height:437px; width:400px" /></a></p>
<p>3. Follow the steps below to complete the Skill Information tab:</p>
<p><img alt="" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/MX_Launch-TechBlog-AddNewLanguage._CB1533576256_.png" style="height:168px; width:400px" /></p>
<p><img alt="" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/MX_Launch-TechBlog-ChooseLanguage._CB1533576475_.png" style="height:298px; width:400px" /></p>
<p><img alt="" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/MX_Launch-TechBlog-Save._CB1533577001_.png" style="height:180px; width:400px" /></p>
<ul>
<li>Click <strong>“+ Add New Language”</strong></li>
<li>Select “<strong>Spanish (MX)</strong>”</li>
<li>Click on <strong>Save</strong> (you will now have Spanish (MX) as an option in the language drop-down)</li>
</ul>
<p><img alt="" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/MX_Launch-TechBlog-DropDown._CB471519876_.png" style="height:358px; width:400px" /></p>
<p>4. Now you need to provide the interaction model for the Spanish (MX) version. One way of doing this is to copy the interaction model from the English version of the skill and translate the sample utterances, slot values, and synonyms. Make sure to also change any built-in slots and intents to match the new locale. Switch to the US version by clicking on the language dropdown in the skill builder and choosing English (US).</p>
<p><img alt="" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/MX_Launch-TechBlog-SwitchToEnglish._CB1533578049_.png" style="height:307px; width:400px" /></p>
<p>5. Click on <strong>JSON Editor</strong> on the left side bar. This displays the complete interaction model for the skill in JSON format.</p>
<p><img alt="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/CodeEditor(1)._CB475110788_.png" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/CodeEditor(1)._CB475110788_.png" style="height:293px; width:400px" /></p>
<p>6. Select and copy all of the JSON in the code window.</p>
<p>7. Switch back to <strong>Spanish(MX)</strong> using the dropdown from Step 4.</p>
<p>8. Click on <strong>JSON Editor</strong> again, and paste the JSON into the code window, replacing the existing JSON.</p>
<p>9. Translate all sample utterances, slot values and slot synonyms.</p>
<p>10. Click on the <strong>Save Model</strong> button.</p>
<p>11. Click on the <strong>Build Model</strong> button.</p>
<p><img alt="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/JSON_Editor(1)._CB475111234_.png" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/JSON_Editor(1)._CB475111234_.png" style="height:152px; width:400px" /></p>
<p>We now have the language model built for Spanish (MX). You now need to translate the invocation name, the sample utterances, the slot values, and the synonyms. You must also localize the skill metadata, including the skill name, description, keywords, and possibly the icon. The skill’s metadata is available in the “Distribution” tab of the Alexa Developer Console.</p>
<p>If your interaction model uses any <a href="https://developer.amazon.com/docs/custom-skills/slot-type-reference.html">built-in slot types</a>, you may need to make changes to ensure that the types are supported in the locale. For example, the AMAZON.US_FIRST_NAME is supported in English (US), English (UK), English (Canada), and German. An equivalent first name slot type, AMAZON.FirstName, is available for Spanish, French, English (India), English (Australia) and Japanese. See the <a href="https://developer.amazon.com/docs/custom-skills/slot-type-reference.html">Slot Type Reference</a> for a list of slot types for each supported locale.</p>
<p>Once you have finished translating your interaction model for Spanish (MX), you need to customize the responses your skill returns for each locale that you support. Do this by updating your Lambda function.</p>
<h2>Part 2: Update the Lambda Function</h2>
<p>Now that your skill is ready to support multiple regions, you will need to update your Lambda function to ensure that your skill provides responses tailored to each supported region.</p>
<p>At the very least, translate into Spanish the strings the skill sends to Alexa to be rendered in Alexa’s voice. You can also use this technique to return different strings for different variations of English. For instance, you may want to greet your customers with “G’day” in Australia, “Hello” in Canada and the UK, “Namaste” in India, &quot;Hi&quot; in the US, and “Buenos Dias” in Mexico. The Alexa Skills Kit makes that really simple. Here is an example of how you would do this with the Alexa Skills Kit SDK for Node.js, leveraging the i18next library. For brevity, in this example, all English-based languages share the same set of strings.</p>
<p><strong>Step 1: Set the Language Strings for Each Region</strong></p>
<p>To do this, we define all user-facing language strings in the following format (shortened for legibility):</p>
<pre>
<code class="language-javascript">const languageString = {
  en: {
    translation: {
      QUESTIONS: questions.QUESTIONS_EN_US,
      GAME_NAME: 'Reindeer Trivia',
      HELP_MESSAGE: 'I will ask you %s multiple choice questions. Respond with the number of the answer. For example, say one, two, three, or four. To start a new game at any time, say, start game. ',
    },
  },
  de: {
    translation: {
      QUESTIONS: questions.QUESTIONS_DE_DE,
      GAME_NAME: 'Wissenswertes &uuml;ber Rentiere in Deutsch',
      HELP_MESSAGE: 'Ich stelle dir %s Multiple-Choice-Fragen. Antworte mit der Zahl, die zur richtigen Antwort geh&ouml;rt. Sage beispielsweise eins, zwei, drei oder vier. Du kannst jederzeit ein neues Spiel beginnen, sage einfach „Spiel starten“. ',
    },
  },
  es: {
    translation: {
      QUESTIONS: questions.QUESTIONS_ES_MX,
      GAME_NAME: 'Trivia de Renos',
      HELP_MESSAGE: 'Voy a hacerte %s preguntas de opcion multiple. Responde con el numero de la respuesta, por favor. Por ejemplo, di uno, dos, tres o cuatro. Para comenzar un nuevo juego, di nuevo juego en cualquier momento.',
    },
  },
};</code></pre>
<p>As you can see, the languageString object contains an object for each supported language: English (en), German (de), and Spanish (es). The object names match the language portion of the locale property that is passed to our skill when it is invoked by Alexa. The locale tells us which language model the user’s device is configured to use, so Alexa can respond in the appropriate language. If you wanted to support French and Japanese, you would add additional objects for 'fr' and 'ja' with appropriate translations.</p>
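<p>To make that mapping concrete, here is a tiny illustrative sketch, not i18next’s actual implementation, of how a regional locale such as es-MX resolves to the base-language object. The resolveBundle helper and the trimmed resources object are our own simplification for demonstration:</p>
<pre>
<code class="language-javascript">// Illustrative only: mimics how a regional locale ('es-MX') falls back
// to its base-language bundle ('es') when no region-specific bundle exists.
function resolveBundle(resources, locale) {
  if (resources[locale]) return resources[locale]; // exact match first
  const base = locale.split('-')[0]; // 'es-MX' resolves to 'es'
  return resources[base];
}

const resources = {
  en: { GAME_NAME: 'Reindeer Trivia' },
  es: { GAME_NAME: 'Trivia de Renos' },
};

console.log(resolveBundle(resources, 'es-MX').GAME_NAME); // Trivia de Renos</code></pre>
<p>i18next performs this resolution for you when you pass the request’s locale as lng, which is why keying the bundles by bare language codes is enough here.</p>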
<p>You can see this in action by looking at the JSON request sent to your skill through the Service Simulator. When testing in the simulator, be sure to select the tab for the language you want to test. In our example, when testing with the Spanish (MX) language, the request sent to the skill includes the es-MX locale:</p>
<p><img alt="" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/MX_Launch-TechBlog-LocaleJson._CB1533582410_.png" style="height:384px; width:400px" /></p>
<p>Each language has a translation object within languageString. This is where we specify any properties that differ between languages. For our example, we have HELP_MESSAGE and GAME_NAME as part of the language strings. You can add more strings as you find relevant.</p>
<p><strong>Step 2: Enable Internationalization for Your Skill Using the Alexa Skills Kit SDK</strong></p>
<p>To enable string internationalization features in the current version of the ask-sdk, we will do three things:</p>
<p>1. Import an external library; we picked i18next for this example</p>
<pre>
<code class="language-javascript">const i18n = require('i18next');
const sprintf = require('i18next-sprintf-postprocessor');</code></pre>
<p>2. Create a Request Interceptor to make sure every request gets processed</p>
<pre>
<code class="language-javascript">const LocalizationInterceptor = {
  process(handlerInput) {
    const localizationClient = i18n.use(sprintf).init({
      lng: handlerInput.requestEnvelope.request.locale,
      overloadTranslationOptionHandler: sprintf.overloadTranslationOptionHandler,
      resources: languageString,
      returnObjects: true,
    });
    const attributes = handlerInput.attributesManager.getRequestAttributes();
    attributes.t = function (...args) {
      return localizationClient.t(...args);
    };
  },
};</code></pre>
<p>3. Register the Interceptor</p>
<pre>
<code class="language-javascript">const skillBuilder = Alexa.SkillBuilders.custom();

exports.handler = skillBuilder
  .addRequestHandlers(
    LaunchRequest,
    HelpIntent,
    StopIntent,
    CancelIntent,
  )
  .addRequestInterceptors(LocalizationInterceptor)
  .lambda();</code></pre>
<p><strong>Step 3: Access the Language Strings in Your Code</strong></p>
<p>Since our interceptor puts all of our localized strings into the request attributes, once you have defined and enabled the language strings you can access them through the `requestAttributes.t` function. Strings will be rendered in the language that matches the locale of the incoming request.</p>
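<p>For example, a handler can read its localized prompt through that function. The t-lookup pattern below matches the interceptor approach; the mock handlerInput, its stand-in t implementation, and the trimmed string table are our own simplification so the example runs outside the ASK SDK:</p>
<pre>
<code class="language-javascript">// A handler reading a localized string via requestAttributes.t.
const GameNameHandler = {
  handle(handlerInput) {
    const attributes = handlerInput.attributesManager.getRequestAttributes();
    return handlerInput.responseBuilder
      .speak(attributes.t('GAME_NAME'))
      .getResponse();
  },
};

// Minimal mock of the pieces the SDK would normally provide:
const strings = {
  en: { GAME_NAME: 'Reindeer Trivia' },
  es: { GAME_NAME: 'Trivia de Renos' },
};
function mockHandlerInput(locale) {
  const lang = locale.split('-')[0];
  const attributes = { t: function (key) { return strings[lang][key]; } };
  return {
    attributesManager: { getRequestAttributes: function () { return attributes; } },
    responseBuilder: {
      speech: null,
      speak(text) { this.speech = text; return this; },
      getResponse() { return { outputSpeech: this.speech }; },
    },
  };
}

const response = GameNameHandler.handle(mockHandlerInput('es-MX'));
console.log(response.outputSpeech); // Trivia de Renos</code></pre>
<p>The handler itself never checks the locale; the interceptor has already bound t to the right language before the handler runs.</p>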
<p>That’s all that it takes to update your skill to be available for customers in Mexico. We are excited that Alexa is available in Mexico, and we can't wait to see what you build.</p>
<p>Check out our <a href="https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/developing-skills-in-multiple-languages">documentation</a> to learn more about how you can use ASK to create multi-language Alexa skills.</p>
<h2>Get Started</h2>
<p>Check out the following training resources, tutorials, and code samples to start building Alexa skills:</p>
<ul>
<li>Alexa Skill Templates and Sample Code on <a href="https://github.com/alexa?utf8=%E2%9C%93&amp;q=skill&amp;type=&amp;language=" target="_blank">GitHub</a></li>
<li><a href="https://github.com/alexa/alexa-cookbook" target="_blank">Alexa Skill-Building Cookbook</a></li>
<li><a href="https://www.codecademy.com/learn/learn-alexa" target="_blank">Alexa Skill Development Courses on Codecademy</a></li>
<li><a href="https://developer.amazon.com/alexa-skills-kit/alexa-skills-developer-training">Alexa Skills Kit Training Resources</a></li>
<li><a href="https://forums.developer.amazon.com/spaces/165/index.html" target="_blank">Alexa Developer Forums</a></li>
</ul>
<p>&nbsp;</p>/blogs/alexa/post/ec66406c-094c-4dbc-8e9f-01050b27d43d/automatic-transliteration-can-help-alexa-find-data-across-language-barriersAutomatic Transliteration Can Help Alexa Find Data Across Language BarriersLarry Hardesty2018-08-09T01:33:19+00:002018-08-10T14:53:32+00:00<p>Amazon AI researchers have publicly released a new dataset with transliterations of 400,000 names, to aid the development of systems that can search for data across languages that use different scripts.</p><p><em><sup>Steve Ash cowrote this post with Yuval Merhav.</sup></em></p>
<p>As Alexa-enabled devices continue to expand into new countries, finding information across languages that use different scripts becomes a more pressing challenge. For example, a Japanese music catalogue may contain names written in English or the various scripts used in Japanese — Kanji, Katakana, or Hiragana. When an Alexa customer, from anywhere in the world, asks for a certain song, album, or artist, we could have a mismatch between Alexa’s transcription of the request and the script used in the corresponding catalogue.&nbsp;</p>
<p><a href="https://www.youtube.com/watch?v=Ct6BUPvE2sM" target="_blank"><img alt="PPAP_illo.jpg" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/PPAP_illo._CB471576278_.jpg?t=true" style="float:left; height:468px; margin-left:10px; margin-right:10px; width:450px" /></a>To address this problem, we developed a machine-learned multilingual named-entity transliteration system. Named-entity transliteration is the process of converting a name from one language script to another. We describe the design challenges of building such a system in a <a href="https://arxiv.org/pdf/1808.02563.pdf" target="_blank">paper</a> we are presenting this month at the 27th International Conference on Computational Linguistics (<a href="http://coling2018.org/" target="_blank">COLING 2018</a>).</p>
<p>The first challenge is obtaining a large dataset that contains name pairs in different languages. Since we could not find a publicly available dataset that satisfied our needs, we created a new dataset based on <a href="https://www.wikidata.org/" target="_blank">Wikidata</a>, a central knowledge base for Wikipedia and other Wikimedia projects. We have released our dataset <a href="https://github.com/steveash/NETransliteration-COLING2018" target="_blank">online</a>, together with our code, under a Creative Commons license.</p>
<p>The Wikidata page for a given person will usually list versions of his or her name in multiple languages. We automatically collected all available pairings of English versions of names with Japanese, Hebrew, Arabic, or Russian versions. We then applied a few heuristics to filter out noisy pairs, which we detail in the paper. (We initially collected data on titles of works as well, but they too frequently involved translation, not just transliteration.)</p>
<p>In most names, the pronunciation of the last name is independent of the first or middle names. So it makes sense to train a transliteration system on independent pairs of first names, last names, and so on.&nbsp;</p>
<p>Wikidata doesn’t include separate tags for first, middle, and last names, but there are systematic correspondences between the positions of names in different transliterations. So we wrote some scripts that use those correspondences to extract pairs of one-name transliterations. For example, the English/Russian Wikidata label pair [“Amy Winehouse”, “Эми Уайнхаус”] would produce two data instances in our training set: [“Amy”, “Эми”] and [“Winehouse”, “Уайнхаус”]. The result was a dataset containing almost 400,000 one-name pairs.</p>
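<p>The positional pairing can be sketched in a few lines. This is an illustrative reconstruction, not the authors’ released scripts, which apply further filtering heuristics described in the paper:</p>
<pre>
<code class="language-javascript">// Split a pair of Wikidata labels into one-name training pairs by
// aligning tokens positionally when the token counts match.
function extractNamePairs(sourceLabel, targetLabel) {
  const source = sourceLabel.split(' ');
  const target = targetLabel.split(' ');
  // Skip noisy pairs whose token counts do not line up.
  if (source.length !== target.length) return [];
  return source.map(function (name, i) { return [name, target[i]]; });
}

console.log(extractNamePairs('Amy Winehouse', 'Эми Уайнхаус'));
// [ [ 'Amy', 'Эми' ], [ 'Winehouse', 'Уайнхаус' ] ]</code></pre>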
<p>We then used our dataset to train several machine-learning systems, employing both traditional approaches and more recent neural approaches that have yielded strong results on machine translation tasks. We achieved the best results using the <a href="https://arxiv.org/pdf/1706.03762.pdf" target="_blank">Transformer</a>, a neural-network architecture that dispenses with some of the complexities of convolutional or recurrent networks and instead relies on attention mechanisms, which focus the network on particular aspects of the data passing through it.&nbsp;</p>
<p><img alt="transformer-diagram-1152x889.png" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/transformer-diagram-1152x889._CB471572976_.png?t=true" style="display:block; height:424px; margin:20px auto; width:600px" /></p>
<p style="text-align:center"><sub><em>This diagram depicts the Transformer architecture as we used it for named-entity transliteration. Unlike other neural translation architectures, the Transformer encodes the entire input simultaneously using self-attention to capture interactions between different input characters. The encoded input is decoded sequentially, one character at a time. Each predicted output character can influence subsequent predictions via the self-attention layer (bottom right), which captures interactions between output characters, and the attention layer (top), which captures interactions between input and output.</em></sub><br /> &nbsp;</p>
<p>The Transformer network outperformed an encoder-decoder recurrent neural-network architecture with attention and also a more traditional weighted-finite-state-transducer approach to sequence-to-sequence transduction based on the <a href="https://github.com/AdolfVonKleist/Phonetisaurus" target="_blank">Phonetisaurus</a> library, a data-driven, open-source toolkit for grapheme-to-phoneme conversion.&nbsp;</p>
<p>While the Transformer achieved the best results overall, there are other factors that affect performance. First, the language pair makes a significant difference. For example, the system performs significantly worse when transliterating English to Hebrew or Arabic than when transliterating English to a more similar language such as Russian.&nbsp;</p>
<p>The direction of the transliteration also plays an important role. In every case, using English as the target language (e.g., training a model to transliterate from Russian to English) results in much worse accuracy, and once again the impact varies depending on the source language. Finally, we also found that the size of the training set doesn't have a significant impact on accuracy. For all languages, we were able to reach close to optimal performance with about 50% of the training data.</p>
<p>The paper contains an error analysis section that offers insights into some of our findings, such as why transliterating into English is harder. (One explanation is that on Wikidata, as elsewhere on the Web, words in Semitic languages are written without diacritical marks, so the network has to guess at the missing vowels.)</p>
<p><em>Yuval Merhav is a machine learning scientist in the Alexa AI organization, and Steve Ash is a senior machine learning engineer in the Amazon Web Services AI organization. They will present their work at the 27th International Conference on Computational Linguistics this month.</em></p>
<p><strong><a href="https://arxiv.org/pdf/1808.02563.pdf" target="_blank">Paper</a>:</strong> “Design Challenges in Named Entity Transliteration”</p>
<p><strong>Related:</strong><br /> <br /> <a href="https://developer.amazon.com/blogs/alexa/post/7dde86fa-0a4f-4984-82d1-7a7d1282fb0c/machine-translation-accelerates-how-alexa-learns-new-languages" target="_blank">Machine Translation Accelerates How Alexa Learns New Languages</a><br /> <a href="https://blog.aboutamazon.com/amazon-ai/where-computer-science-and-linguistics-meet" target="_blank">Where computer science and linguistics meet</a><br /> <a href="https://blog.aboutamazon.com/amazon-ai/expanding-the-natural-language-processing-community" target="_blank">Expanding the Natural-Language Processing community</a><br /> <br /> <em>Photo credit: Blur Life 1975 / Shutterstock</em></p>/blogs/alexa/post/57d0bb9c-19a6-4c51-bfa2-fc6753d14b68/4-principles-of-conversational-voice-design4 Principles of Conversational Voice DesignJennifer King2018-08-09T01:00:00+00:002018-08-09T14:36:03+00:00<p><img alt="" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/c53f5d9ea335b8d81da9f7403b793abc564f0236062f4430b4c08fabdfca5189_64a500f8-d83f-4eba-bf6e-533225cf2cf1._CB488143019_.png" style="height:240px; width:954px" /></p>
<p>Today’s voice-first technologies are built with <a href="https://developer.amazon.com/alexa-skills-kit/nlu">natural language understanding (NLU)</a> and <a href="https://developer.amazon.com/alexa-skills-kit/asr">automatic speech recognition (ASR)</a>, which are forms of artificial intelligence centered on recognizing patterns and meaning within human language. While technologists have been using NLU and ASR for decades, today they are more consumable and accessible to developers via tools like the <a href="https://developer.amazon.com/alexa-skills-kit/asr">Alexa Skills Kit (ASK)</a>. With ASK, you can create an Alexa skill that takes what a customer says, forms it into a structured request, and handles that request to produce a meaningful response.</p>
<p>The best way to leverage this technology for conversational voice design is through experimentation and practice. That’s why the Alexa team has released its own best practices and key concepts you can leverage to create standout skills.</p>
<p>Here are some core principles that we have uncovered in conversational voice design along with some advanced design skill-building tips to create natural voice experiences.</p>
<h2>Elicit Information Using Multi-Turn Dialogs</h2>
<p>Conversation and interpreting verbal responses are among the first things we learn. In a conversation, people often ask questions to gather information, then interpret the answers. The same applies when you develop a skill: before your skill can return a response, it needs to collect certain pieces of information, making sure every required variable is filled.</p>
<p>In voice design, we call this a “<a href="https://developer.amazon.com/docs/custom-skills/dialog-interface-reference.html">multi-turn dialog</a>.” The conversation is tied to a specific intent representing the customer’s overall request, and Alexa asks questions to fill slots until it has all the information it needs to give an appropriate response.</p>
<p><img alt="" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/image_1_cami._CB471576684_.png" style="display:block; margin-left:auto; margin-right:auto" /></p>
<p>As an example, let’s look at <a href="https://github.com/alexa/skill-sample-nodejs-petmatch" target="_blank">Pet Match</a>, which is a skill we built to match a customer’s dog preferences to a specific dog breed. The skill needs to learn what temperament, size, and energy level the customer wants in a pet before formulating the complete response. In your skill code, you can prompt the customer for a specific response. Those variables that need to be filled are the slot values. As Alexa continues to elicit more slots, it can request confirmation from the customer and handle any updates. When all the slots are filled, Alexa can perform the final API call to handle the customer’s ultimate request of being matched to a pet.</p>
<p>Multi-turn dialogs are an important concept because of the conversational experience they enable. If a customer is on the web and fills out a form, they might fill in a field titled “Name.” With voice, however, if Alexa just said, “Name,” a user might be confused, or even intimidated by the command. Multi-turn dialogs allow Alexa to sound more conversational. At their core, they are an approach to gathering the information for a customer’s request in a way the customer can easily understand.</p>
<h2>Optimize Conversations with Dialog Management</h2>
<p>Multi-turn dialogs can easily become a large tree based upon the information you need to fulfill a request. The worst-case scenario is if a customer thinks the line of questioning is too long, forgets what they have already answered, or forgets what they want to say. Fortunately, ASK has capabilities for more dialog support.</p>
<p><img alt="" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/image_2_cami._CB471576686_.png" style="display:block; height:360px; margin-left:auto; margin-right:auto; width:600px" /></p>
<p>We talk about graph-based UI versus frame-based UI to illustrate the difference. A graph-based UI models a flow chart or decision tree. Not only does it invite the customer frustrations described above, but it is also a lot for developers to keep track of. You have to rely on the customer&rsquo;s memory, and the form they are filling out turn by turn can easily backfire. Furthermore, building out a decision tree in this way is not conversational. A customer should be able to tell you the information they know they need to provide up front and at any time.</p>
<p>To resolve this, we introduced the concept of frame-based UI. With this model, there is an entrance criterion, essentially navigating to the point in the skill where you need to gather information. There is also an exit criterion, which is the information you need to gather to move on to the next part of the skill. Performing this collection of information is called “dialog management.”</p>
<p>With dialog management, a customer can provide information to Alexa at any point, regardless of what was asked. Alexa will then know to interpret the values and assign them to the appropriate slot values and elicit any remaining slots, according to the exit criteria, from the customer.</p>
<p><img alt="" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/image_3_cami._CB471576681_.png" style="display:block; height:359px; margin-left:auto; margin-right:auto; width:600px" /></p>
<p>With the Pet Match example, the entrance criterion is asking to be matched with a new pet. The exit criteria are gathering what size, energy, and temperament the user wants in a dog. A customer can provide this information at any point. If Alexa asks &ldquo;What size of dog would you like?&rdquo; and a customer responds &ldquo;I want a small, family friendly dog,&rdquo; Alexa will know to fill the {size} and {temperament} slots with those values respectively, and then prompt for the remaining {energy} slot to be filled. Once the exit criteria are met, the service calls the petMatchAPI() and a response is sent to the customer with an appropriate pet match.</p>
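<p>The frame-based loop described above can be sketched in plain JavaScript, without the ASK SDK. The slot names, prompts, and the <code>nextTurn</code> helper below are hypothetical illustrations, not the actual Pet Match implementation: the idea is simply that each turn merges whatever slots the customer supplied, then prompts for the first slot still missing from the exit criteria.</p>

```javascript
// Hypothetical sketch of frame-based slot filling (not the real Pet Match code).
// Exit criteria: the slots that must be filled before calling the matching service.
const EXIT_CRITERIA = ['size', 'energy', 'temperament'];

const PROMPTS = {
  size: 'What size of dog would you like?',
  energy: 'Would you rather have a dog that is lazy or energetic?',
  temperament: 'What temperament should the dog have?',
};

function nextTurn(filledSlots, newSlots) {
  // Accept slot values in any order, regardless of what was asked.
  const slots = { ...filledSlots, ...newSlots };
  const missing = EXIT_CRITERIA.find((name) => slots[name] === undefined);
  if (missing) {
    // Still inside the frame: elicit the next unfilled slot.
    return { slots, done: false, prompt: PROMPTS[missing] };
  }
  // Exit criteria met: the skill would now call its matching service.
  return { slots, done: true };
}
```

<p>So a customer answering the size question with &ldquo;I want a small, family friendly dog&rdquo; fills two slots at once, and the next prompt targets only the remaining energy slot.</p>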
<p>All this being said, graph-based UI is sometimes unavoidable. In practice, try to decompose portions of your graph UI to a frame UI. This will create an overall more conversational experience for your customers.</p>
<h2>Diversify Understanding with Entity Resolution</h2>
<p>When we break down ASK, the easiest way to understand what is required of a developer is to look at the dialog. You need to provide what the customer says and how Alexa responds; ASK is able to handle everything else. However, you probably can&rsquo;t think of everything a customer could possibly say, nor should you. Think of your utterances as training data for Alexa. Recognize that with any training data, more is not necessarily better. Try to think of phrases and different rearrangements of a phrase and incorporate them into your skill.</p>
<p>You can use <a href="https://developer.amazon.com/docs/custom-skills/define-synonyms-and-ids-for-slot-type-values-entity-resolution.html">entity resolution</a> to accomplish this. With entity resolution, a developer can assign synonyms to slot values. When a customer uses the synonym, it can then be resolved to the default slot value and perform the same logic.</p>
<p><img alt="" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/image_4_cami._CB471576683_.png" style="display:block; height:211px; margin-left:auto; margin-right:auto; width:754px" /></p>
<p>When a customer uses Pet Match and Alexa prompts the customer for an energy level, they might not know exactly what they want. Instead, if they managed to say something like, “I want a dog I can run with,” Alexa should be able to interpret that phrase as a high-energy pet. Thus, the phrase “that I can run with” resolves to high energy, and Alexa can send that variable to the Pet Match API. Entity resolution can be one word or a phrase. Think outside the box about what customers might say to express an idea. For example, if they want a low-energy pet, they might talk about their lifestyle and say something like, “I don’t exercise.”</p>
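<p>Conceptually, entity resolution is a synonym-to-canonical-value mapping defined in the interaction model. The sketch below imitates that mapping in plain JavaScript; the slot values and synonyms are illustrative assumptions, not the actual Pet Match model, and in a real skill Alexa performs this resolution for you and reports it in the slot&rsquo;s resolutions object.</p>

```javascript
// Hypothetical energy slot type: canonical values mapped to synonyms,
// mirroring what a skill's interaction model would declare.
const ENERGY_SLOT_TYPE = {
  low:    ['lazy', 'calm', "i don't exercise"],
  medium: ['average', 'moderate'],
  high:   ['energetic', 'active', 'that i can run with'],
};

function resolveSlotValue(slotType, spoken) {
  const phrase = spoken.toLowerCase();
  for (const [canonical, synonyms] of Object.entries(slotType)) {
    if (canonical === phrase || synonyms.includes(phrase)) {
      return canonical; // analogous to an ER_SUCCESS_MATCH resolution
    }
  }
  return null; // analogous to ER_SUCCESS_NO_MATCH: reprompt the customer
}
```

<p>Whether the customer says &ldquo;energetic&rdquo; or a whole lifestyle phrase, the same canonical value reaches the skill logic.</p>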
<p>While entity resolution helps to handle many things a customer might say, you should shape Alexa&rsquo;s prompts so the customer&rsquo;s response follows a predictable cadence. Be direct, as this will eliminate the need to train many phrases and slot values. If you are looking for a &ldquo;low,&rdquo; &ldquo;medium,&rdquo; or &ldquo;high&rdquo; resolution for energy, give the customer a choice. For example, Alexa could say, &ldquo;Would you rather have a dog that is lazy or that is energetic?&rdquo;</p>
<h2>Surprise and Delight Customers with Variance and Memory</h2>
<p>Memory is a concept common to most technical mediums. With voice, you can interpret the word &ldquo;memory&rdquo; less as storage or caching and more as recollection and remembrance. If we have a conversation, walk away, and then you bring up the topic again moments later, it would be a bad conversational experience if I completely forgot what we were discussing and repeated exactly what I said before.</p>
<p>Conversation is different every time, in large or small ways. The same conversational principle applies to Alexa. If you have a customer who is continuously using your skill, they need not hear instructions in the opening message every time they invoke your skill. You don’t want the customer to tune out what Alexa is saying. The skill should remember their previous choices and provide variance in what Alexa says according to the customer’s usage.</p>
<p>There is, however, a lot to be said about consistency, and it can easily be lost with variance. If a customer enjoyed their experience in your skill previously, you will want to deliver the same enjoyable experience as they continue to invoke it. Think about variance as a benefit to memory. Recalling a customer’s name by saying, “Welcome back, Cami,” will be a delightful addition to the skill, but by no means change the experience.</p>
<p>Memory can be achieved within your skill via a customer’s userId. The userId can be provisioned as a primary key in your database to store user-specific contextual information from a skill session. These attributes are called <a href="https://developer.amazon.com/alexa-skills-kit/big-nerd-ranch/alexa-implementing-persistence">persistent attributes</a>.</p>
<p><img alt="" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/image_5_cami._CB471576677_.png" style="display:block; margin:10px auto" />Within Pet Match, the persistent attributes are the previous matches a customer has received. When the customer invokes the skill, they can either hear their previous matches or start a new search. In any case, this allows the customer to reflect on what they were previously told. The service code hosted on AWS Lambda calls DynamoDB when a new search is performed or when a customer wants to hear their previous searches. This call is minimal in terms of latency, and gives the skill a new level of depth.</p>
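<p>The pattern of keying persistent attributes by userId can be sketched as follows. Here a <code>Map</code> stands in for the DynamoDB table, and the function names are illustrative; a real skill would use the ASK SDK&rsquo;s persistence adapter rather than this hand-rolled store.</p>

```javascript
// Hypothetical persistence sketch: a Map stands in for a DynamoDB table.
const db = new Map();

function saveMatch(userId, match) {
  // The userId acts as the primary key for the customer's record.
  const record = db.get(userId) || { previousMatches: [] };
  record.previousMatches.push(match);
  db.set(userId, record);
}

function previousMatches(userId) {
  const record = db.get(userId);
  return record ? record.previousMatches : [];
}
```

<p>On a later invocation, looking up the same userId returns every match the customer has been told before, so the skill can recall them instead of starting cold.</p>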
<p>With these principles in hand, I hope that you can develop standout Alexa skills. But we are learning along with you; the more developers building for voice, the larger the learning and earning potential.</p>
<p>Check out these resources to find out more about the art and science of Alexa communication and conversational voice design:</p>
<ul>
<li><a href="https://developer.amazon.com/alexa-skills-kit/dialog-management">Dialog Management Resource Center</a></li>
<li><a href="https://www.codecademy.com/learn/alexa-conversational-design" target="_blank">Codecademy: Conversational Design with Alexa</a></li>
<li><a href="https://www.codecademy.com/courses/learn-alexa-persistence/lessons/dynamo-db?course_redirect=learn-alexa" target="_blank">Codecademy: Add Persistence to Your Skill</a></li>
<li><a href="https://developer.amazon.com/blogs/alexa/post/648c46a1-b491-49bc-902d-d05ecf5c65b4/tips-on-state-management-at-three-different-levels">Tips on State Management at Three Different Levels</a></li>
</ul>
<h2>Make Money by Creating Engaging Skills Customers Love</h2>
<p>You can make money through Alexa skills using <a href="https://developer.amazon.com/alexa-skills-kit/make-money/in-skill-purchasing">in-skill purchasing</a> or <a href="https://developer.amazon.com/alexa-skills-kit/make-money/amazon-pay">Amazon Pay for Alexa Skills</a>. You can also make money for eligible skills that drive some of the highest customer engagement with <a href="https://developer.amazon.com/alexa-skills-kit/rewards">Alexa Developer Rewards</a>. <a href="http://dev.amazonappservices.com/Alexa_Skill_Monetization_Guide_LP.html" target="_blank">Download our guide</a> to learn which product best meets your needs.</p>
<h1>How to Navigate the Complexities of Digital Card Game Design</h1>
<p>Emily Esposito Fulkerson, 2018-08-07</p>
<p><img alt="Missile-Cards-Blog-Banner.png" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AppstoreBlogs/NathanRanneyblogposts/Missile-Cards-Blog-Banner._CB496945316_.png?t=true" style="display:block; height:360px; margin-left:auto; margin-right:auto; width:900px" /></p>
<p>Digital card games are a fun challenge to design because they give you a basic and familiar foundation (a deck of cards) that you can adapt to just about any genre, style of play, or conceptual theme. I based a lot of my prototypes, initially, on a single deck of 52 cards, but in the end, precious few of them wound up resembling anything close to a traditional solitaire card game.</p>
<p><a href="https://www.amazon.com/Nathan-Meunier-Missile-Cards/dp/B07CVM9S4V" target="_blank">Missile Cards</a> may be my first commercially released card game, but I’ve created dozens of card game prototypes and am working on numerous card-based projects I plan to release in the future. From working on these projects, I’ve noted some common elements that tend to make or break a digital card game’s design.</p>
<p>Here’s a look at some of the core design considerations that you need to think about when making digital card games.</p>
<h2>1. A strong theme is everything</h2>
<p>A theme is usually the first place I start when I&rsquo;m thinking about a new card game project, because it quickly helps me sort out a visual style for my game.</p>
<p>Having a striking visual style is important, but I also find that deciding on a theme helps me come up with interesting ideas for the gameplay itself. Whether you’re making a card game about dungeon crawling, fishing, city building, sci-fi combat, or even relationship building, each potential theme holds an exciting range of possibilities for visual style and unique card mechanics.</p>
<h2>2. Figure out your visual layout early on</h2>
<p>When designing card games, I often begin mocking up visual layouts very early on in the process, before I even start coding up the game itself. This lets me adjust card size, play around with positioning, and make important decisions about how the game might play based on the limitations of resolution and screen space. I often find that doing this helps me identify potential design issues right away, saving me a lot of time and energy.</p>
<p>It&rsquo;s also worth mentioning that you shouldn&rsquo;t finalize any of your artwork until you&rsquo;ve got your core layout locked down, since layout decisions can have a huge impact on your game&rsquo;s visual direction.</p>
<h2>3. Balance accessibility, depth, and replay</h2>
<p>Many mobile players prefer games they can enjoy in short bursts instead of marathon sessions. This is one of the reasons card games are so popular on mobile: they often balance short, accessible gameplay loops with high replayability and long-term metagame progression.</p>
<p>When designing card games, it’s worth paying close attention to how long it takes to play through a game and tuning that to be a short, highly replayable experience. Making sessions fun and punchy is important, but also explore ways to layer on progression mechanics, unlockables, and other goals that give players a reason to keep coming back for more.</p>
<h2>4. Make the most of the digital format</h2>
<p>Physical card game design often tends to be centered around a limited set of simple rules and mechanics—things that can be easily digested without pushing players over the edge of information overload. With digital card games, however, you can get away with much more complexity because you can build it into the behind-the-scenes system that runs the gameplay.</p>
<p>From dice rolls and stat tracking to randomization and special events, a lot of the nitty-gritty can be handled by code, freeing up players to focus on whatever you present to them. This opens the door to weaving lots of unusual genres into card-based designs. Missile Cards, for example, just wouldn’t be possible as a simple card game without forcing players to keep track of an excessive amount of information. But the digital format let me design systems to automatically handle a lot of the complexity, allowing players to concentrate on the strategic defense gameplay and simply enjoy the experience.</p>
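<p>As a small illustration of code absorbing that bookkeeping, here is a minimal deck helper (hypothetical, not from Missile Cards): shuffling and drawing happen invisibly in the engine, so the player never tracks card order themselves.</p>

```javascript
// Minimal deck helpers: the digital format handles shuffling and drawing
// so the player only sees the resulting hand.
function shuffle(deck) {
  const cards = [...deck];
  // Fisher-Yates shuffle: swap each card with a random earlier position.
  for (let i = cards.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [cards[i], cards[j]] = [cards[j], cards[i]];
  }
  return cards;
}

function draw(deck, n) {
  // Split the top n cards into the player's hand, keep the rest as the deck.
  return { hand: deck.slice(0, n), rest: deck.slice(n) };
}
```
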
<h2>Quick tips for digital card design</h2>
<ul>
<li>Pick a strong, distinct theme</li>
<li>Make your visual design pop</li>
<li>Aim for short, highly replayable gameplay loops</li>
<li>Take advantage of the digital format</li>
</ul>
<h2>Get my free eBook to learn more</h2>
<p>To learn more, download the&nbsp;free eBook titled, &quot;<a href="http://m.amazonappservices.com/missile-cards-ebook" target="_blank">Behind the Scenes: Lessons Learned from the Making of Missile Cards</a>.&quot; I'll share more tips on the complexities of card-game design, how to design for multiple devices, and tips to bring your game to life.<br /> &nbsp;</p>
<p><a href="http://m.amazonappservices.com/missile-cards-ebook" target="_blank"><img alt="eBook_Button.png" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AppstoreBlogs/Unity/eBook_Button._CB489599759_.png?t=true" style="display:block; margin-left:auto; margin-right:auto" /></a></p>
<p><br /> <br /> <img alt="Nathan-Meunier-headshot.png" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AppstoreBlogs/Influencerblogs/Nathan-Meunier-headshot._CB472623413_.png?t=true" style="display:block; margin-left:auto; margin-right:auto" /></p>
<p style="text-align:center"><a href="https://nathanmeunier.com" target="_blank"><em>Nathan Meunier</em></a>&nbsp;<em>is an indie developer, freelance writer, author, and creator of Missile Cards. His work has appeared in more than 40 publications including Nintendo Power, PC Gamer, GameSpot, EGM, and many others. He is also the co-founder of indie studio <a href="https://touchfightgames.com/" target="_blank">Touchfight Games</a>.&nbsp;</em></p>