Inspiration

Voice recognition has enormous potential, and our kids' education is fantastic ground for it: putting a wealth of information at their disposal using voice and natural language as the catalyst.

Using Alexa's capabilities for human-like dialog and voice shaping, we are convinced that a compelling storytelling learning application can be built for kids.

What it does

ImaguVoice triggers curiosity across a broad range of general-knowledge topics by engaging kids with fun facts and little-known discoveries that change the way they perceive their world.

Each story is randomly generated, and both characters and facts are customized to each choice made.

The application personalizes the experience by saving each child's progress in every scenario. That saved progress is available in later interactions, making it compelling and simple to return again and again while exploring all available scenarios.

How we built it

The core application runs in a Lambda function, built on Python 3.6, that captures all events configured in our interaction model and decides:

When to read from our configuration files repository in S3.

When to store or retrieve session status from our DynamoDB table.

When to trigger new selections or close the session definitively.
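
The dispatch described above can be sketched as follows. This is a minimal illustration, not our production code; the intent name `ChooseScenarioIntent` is a hypothetical stand-in for the intents in our interaction model.

```python
# Minimal sketch of event dispatch in an Alexa skill Lambda handler.
# Intent names here are hypothetical; the real interaction model differs.

def build_response(speech, end_session=False):
    """Wrap speech text in the Alexa response envelope."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "SSML", "ssml": "<speak>{}</speak>".format(speech)},
            "shouldEndSession": end_session,
        },
    }

def lambda_handler(event, context):
    request = event["request"]
    if request["type"] == "LaunchRequest":
        # Here the skill would create a sessionID and read config files from S3.
        return build_response("Welcome! Pick a scenario to begin.")
    if request["type"] == "IntentRequest":
        intent = request["intent"]["name"]
        if intent == "ChooseScenarioIntent":  # hypothetical intent name
            # Trigger a new random selection and persist it to DynamoDB.
            return build_response("Great choice, let's start the story.")
        if intent == "AMAZON.StopIntent":
            # Close the session definitively.
            return build_response("Goodbye!", end_session=True)
    return build_response("Sorry, I didn't catch that.")
```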

The configuration file repository is critical: it stores all data created by content creators and provides full customization of:

Characters, with specific voice features such as pitch and rate, bound to specific areas.

Scenarios, and the places available within them, for more complex and engaging interactions.
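
As an illustration of the voice shaping above, a character entry can be turned into SSML prosody markup. The config schema shown here is an assumption for the sketch, and "Professor Owl" is a hypothetical character, not one from our actual content.

```python
# Sketch: turning a character entry from the S3 config repository into
# SSML prosody markup. The schema and character are illustrative only.

CHARACTER_CONFIG = {
    "name": "Professor Owl",       # hypothetical character
    "pitch": "+15%",               # voice shaping: higher pitch
    "rate": "slow",                # voice shaping: slower speech
    "areas": ["space", "oceans"],  # areas this character is bound to
}

def character_speech(config, text):
    """Wrap a line of dialogue in prosody tags matching the character."""
    return (
        '<prosody pitch="{pitch}" rate="{rate}">{text}</prosody>'
        .format(pitch=config["pitch"], rate=config["rate"], text=text)
    )
```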

Our DynamoDB table is responsible for storing session information such as the current scenario, the character, and the questions already shared. Everything is bound together by a sessionID (created from a timestamp) stored at application launch.
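
The session record might be shaped as below. The attribute names are assumptions for illustration, and the actual `boto3` write is left as a comment so the sketch runs without AWS credentials.

```python
import time

# Sketch of the session record kept in DynamoDB. Attribute names are
# assumptions; the real table schema may differ. The boto3 call
# (table.put_item(Item=...)) is omitted so this runs without AWS access.

def new_session_id():
    """Create a sessionID from the current timestamp, as done at launch."""
    return "session-{}".format(int(time.time() * 1000))

def build_session_item(session_id, scenario, character, questions):
    """Shape the item that would be written with table.put_item(Item=...)."""
    return {
        "sessionID": session_id,  # partition key
        "scenario": scenario,     # current scenario
        "character": character,   # active character
        "questions": questions,   # questions already shared this session
    }
```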

Finally, all images used were obtained thanks to the amazing Unsplash and its artist community including: Annie Spratt, John Cobb, Andrea Reiman, Matt Sclarandis, Brigitta Schneiter, Mohammad Metri, Pedro Lastra, Ryan Grewell and many others.

Challenges we ran into

Making sure we understood directives correctly was critical, given the conversational nature of the application and the render templates required for our Echo Show development.
Ensuring that our choice of S3 and DynamoDB storage could scale to meet demand was tough, since we wanted to preserve our randomly generated interactions in complex configuration files.
English is not our native language, so triggering the right skill in echosim.io was quite funny, and time-consuming at times.
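
For context on the render-template challenge above, an Echo Show display directive looks roughly like this. The title, text, and image URL are placeholders; our actual templates differ.

```python
# Sketch of a Display.RenderTemplate directive for the Echo Show,
# using the BodyTemplate2 layout. All content values are placeholders.

def render_template_directive(title, text, image_url):
    """Build a BodyTemplate2 directive to attach to a skill response."""
    return {
        "type": "Display.RenderTemplate",
        "template": {
            "type": "BodyTemplate2",
            "token": "story-card",  # identifies this screen
            "title": title,
            "image": {"sources": [{"url": image_url}]},
            "textContent": {
                "primaryText": {"type": "RichText", "text": text},
            },
        },
    }
```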

Accomplishments that we're proud of

Getting the basic interaction model working in only a few days while orchestrating the functional aspects in parallel.
Creating a framework that lets parents and other content creators add stories, characters, facts, and questions without modifying our core Lambda function.

What we learned

How to work within the Alexa Skills Kit workflow.
How to refine our Lambda fundamentals, such as interacting with DynamoDB and S3 files.

What's next for ImaguVoice: Learn with imagination

We would love to continue improving our solution on all fronts, but our short-term priorities are:

Including custom illustrations in our Echo Show experience instead of using full photos from our friends at Unsplash.

Enabling render and dialog directives to work better within our current Lambda operating framework.

Providing a content-creation app so parents can add new scenarios, places, characters, and questions, with its content continually reviewed to keep the application safe for children.

Created by

I worked on the content development team, mainly on creating the stories, selecting the content, and deciding how to present it. Additionally, I was part of the application's test group, identifying errors in the interaction with the user.

I was part of the code development team and helped make the dialogs more human-like. I also tuned up other functions used to leave, get help, or select an interest area in the game, among other things, and helped edit the text used in the dialogs.

I worked on the content creation team and on building the marketing material for the Alexa skill. I was part of the application's test group, identifying errors in the interaction with the user.

I was responsible for creating the base architecture for each application component and defining how the Alexa interaction model operates.
I also managed the skill profile in the Alexa Developer store and contributed to the testing stage.

I worked on the code development team, mainly creating the logic of the dialog during the story, and helped clean up and structure the code. I also worked on the architecture for organizing and storing valuable data about the sessions played and the user experience, for later statistics and continued improvement of the skill.