How to use SiriKit to improve your app’s experience

When the first iPhone was introduced, we witnessed the dawn of a new era of GUI design. But when, a few years later, on 4 October 2011, Apple showed us the iPhone 4S with Siri, a virtual assistant, nobody imagined that it would be the beginning of the next big thing in UX design – the Conversational User Interface.

Now, all the big companies offer voice assistants, often connected with their chats or messengers, to help users of their platforms accomplish daily tasks. What is good for you, a mobile application creator, is that it is now possible to integrate your solution with these revolutionary voice interfaces.

In this post you will find out how your app can successfully integrate with Siri.

What is Siri

If you are not familiar with Apple products, the basic thing you have to know is that Siri has been an integral part of Apple's mobile phones since iOS 5. It is a solution that works as a personal assistant, initially on the iPhone, but it was successfully brought to other devices running watchOS, tvOS and finally macOS. Since iOS 10, application developers have been able to integrate their solutions with Siri through SiriKit.

What is SiriKit

SiriKit is a framework created for iOS developers that allows them to combine the features of their solutions with the personal assistant created by Apple. Your app's users can get things done without touching the icon of your solution; they may use just their voice to do it. Here you can read how Apple describes its product.

For now SiriKit is available only on iOS 10 and newer, but we can expect that the framework will also be implemented on other Apple devices.

How to use SiriKit in your app

You can imagine a countless number of features that could be handled by a voice assistant. However, for now Apple limits integration to a handful of domains.

SiriKit may be used in the context of:

Messaging – send text messages through the app

Audio and Video calling – initiate video or audio calls

Searching photos – search for the image with requested content type

Payments – send or request payments from other people

Ride booking – request a ride through an app providing taxi or similar services

Workouts – manage workout through simple voice commands

Climate and radio – control the climate or adjust the radio while in the car, through CarPlay automaker apps

If your solution includes one of the mentioned features, you are able to integrate with the framework. Apple will surely expand the possibilities in the near future. The next thing you should know is the following: each of these domains includes several different “intents”.

An intent is an action that Siri understands. After the assistant recognizes an intent, it communicates with your solution to perform the task. What is worth mentioning, this communication happens even if your app is not running at the moment. You can browse the full list of intents here.
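To make this concrete, here is a minimal sketch of an intent handler from the Workouts domain, written against the public SiriKit API. The class name and activity type are hypothetical examples, not from the original article.

```swift
import Intents

// Sketch of a Workouts-domain intent handler; class and activity
// names are hypothetical examples.
class StartWorkoutIntentHandler: NSObject, INStartWorkoutIntentHandling {

    // Resolve: help Siri pin down the workout name the user spoke.
    func resolveWorkoutName(for intent: INStartWorkoutIntent,
                            with completion: @escaping (INSpeakableStringResolutionResult) -> Void) {
        if let name = intent.workoutName {
            completion(.success(with: name))
        } else {
            completion(.needsValue())
        }
    }

    // Handle: perform the task, here by handing control over to the app.
    func handle(intent: INStartWorkoutIntent,
                completion: @escaping (INStartWorkoutIntentResponse) -> Void) {
        let activity = NSUserActivity(activityType: "StartWorkout")
        completion(INStartWorkoutIntentResponse(code: .continueInApp,
                                                userActivity: activity))
    }
}
```

Because such a handler lives in an app extension, Siri can call it even when your app itself is not running.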

While Siri is recognizing a domain's intent, it goes into a phase called the “vocabulary step”. The virtual assistant has a vocabulary database used to understand the user's command and map it to an intent. Your solution has to provide its vocabulary to Siri; then it will be able to recognize commands for your app. It is important to design the vocabulary database appropriately.

“Hey Siri start 7 minute workout now from Home Workout app.”
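In code, user-specific terms like the workout names above can be registered with the `INVocabulary` API. A minimal sketch, assuming the workout names below are your app's own (hypothetical) content:

```swift
import Intents

// Register user-specific workout names so Siri can map spoken
// phrases like "7 minute workout" to your app's intents.
// The names here are hypothetical examples.
let workoutNames = NSOrderedSet(array: ["7 minute workout", "morning stretch"])
INVocabulary.shared().setVocabularyStrings(workoutNames, of: .workoutActivityName)
```

Terms that are common to all users of the app go into a global vocabulary file (`AppIntentVocabulary.plist`) instead.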

Visual Interface of Voice Assistant

Siri is mainly software using a conversational user interface. This means that interactions are done through voice or typing. However, there are key moments when there is a need to design a visual interface.

Apple prepared a convenient tool for this purpose called Intents UI, which can be used when you build extensions for Siri and Maps. The Intents UI extension provides only the UI elements needed to present the response to the user. To learn more about the possibilities of visual interface design with Intents UI, read the official docs here.
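A minimal sketch of such an extension's view controller, using the `INUIHostedViewControlling` protocol; the view content itself is left as a placeholder:

```swift
import IntentsUI
import UIKit

// Sketch of an Intents UI extension view controller.
class IntentViewController: UIViewController, INUIHostedViewControlling {

    // Siri calls this with the interaction to display; populate your
    // custom views from interaction.intent, then report the size you need.
    func configure(with interaction: INInteraction,
                   context: INUIHostedViewContext,
                   completion: ((CGSize) -> Void)?) {
        let maxSize = self.extensionContext?.hostedViewMaximumAllowedSize ?? .zero
        completion?(maxSize)
    }
}
```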

Creating a good UX for your solution with Siri

A typical interaction with Siri has the following steps:

Speech – The user says a command

Intent (an app extension) – The intent determines whether the app may react to the spoken command

Action – The app reacts to the intent by preparing the appropriate action

Response – The user confirms that the correct action has been shown and performs it.

Knowledge of each phase of this path will let you prepare a better user experience for your solution's extension.
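On the API level, the Intent, Action and Response steps map onto SiriKit's resolve → confirm → handle sequence. A sketch using the Payments domain as an example (the class name is hypothetical):

```swift
import Intents

// Sketch: confirm checks that the action is possible; handle performs
// it and returns the response that Siri presents to the user.
class SendPaymentHandler: NSObject, INSendPaymentIntentHandling {

    func confirm(intent: INSendPaymentIntent,
                 completion: @escaping (INSendPaymentIntentResponse) -> Void) {
        // Verify e.g. that the user is logged in and has sufficient funds.
        completion(INSendPaymentIntentResponse(code: .ready, userActivity: nil))
    }

    func handle(intent: INSendPaymentIntent,
                completion: @escaping (INSendPaymentIntentResponse) -> Void) {
        // Perform the transfer here, then report the outcome.
        completion(INSendPaymentIntentResponse(code: .success, userActivity: nil))
    }
}
```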

First, inform users that your app has a voice

Before your users can interact with your app through Siri, they have to know that your solution offers such a possibility. You should inform users that they are able to interact with your solution through Siri. There is also a possibility to prepare examples in Siri's help. These samples will let your users test the app's extension.
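Note that before Siri can talk to your app at all, the user has to grant permission. A minimal sketch of requesting it (the `NSSiriUsageDescription` key must also be present in Info.plist):

```swift
import Intents

// Ask for Siri permission up front, so voice features can be
// advertised only to users who actually granted access.
INPreferences.requestSiriAuthorization { status in
    if status == .authorized {
        // Safe to surface "Try saying…" hints in the UI here.
    }
}
```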

Next, change the way you think

You probably spend your working hours designing or implementing the GUI of your app's screens. It is time to think in a very different way and start doing conversational design. You have to remember that the user will have to keep all the information and the context of the task in their head. Keep the conversation brief and clear. Do not overwhelm the user with too many details; provide only relevant information. Keep the answer as short as possible. Remember that the user may use Siri hands-free while driving a car; you should not distract them with UI elements then. Siri interaction works best when you stick to a question-and-answer scheme.

Finally, test your app with Siri

Conversational design is very different from creating a graphical user interface. You cannot see the effect of your work; you have to test it in action. Test your app with Siri in multiple situations and environments. Try speaking with different accents and with varying speed and timbre. Only this way can you be sure that it works fine.

Wrapping up

Siri will evolve and play a huge role in the next releases of iOS. If your app's category is allowed to use SiriKit, you should integrate it with this voice assistant. If you are curious how iOS 11 will probably change things this year, you should definitely read the following article – Future of iOS in 2017.