
You may already be familiar with the recent Alexa dollhouse shopping spree. A morning TV show covering the story of a six-year-old girl in Texas who accidentally ordered a dollhouse and four pounds of cookies through her parents’ Amazon Echo also caused multiple Echo devices within earshot of the broadcast to wake up and place the same order. There are currently two ways to prevent this from happening: you can either disable voice purchasing altogether or set up a confirmation code that is spoken to the Echo device to complete a purchase. Neither of these workarounds is fully satisfactory. One removes a genuinely useful feature from the Echo while the other has security issues, especially if someone overhears you speaking the confirmation code. What the Amazon Echo and other emerging voice services such as chatbots really need is voice biometric authentication.

For many consumers the Amazon Echo is their first serious engagement with artificial intelligence and voice recognition. Yes, Siri has been available on iPhones for years, but the services accessible through Siri have been so limited that it was little more than a gimmick. In contrast, the voice services currently offered by Alexa are simple and quick to invoke with a voice command rather than having to reach for a mobile device or an app, and include the ability to verbally select a radio station or music track, get a weather forecast, or set an alarm or reminder. Yet while useful, the services currently offered on Alexa are pretty basic compared to the more complex purchasing, banking and social media activities we regularly use our mobile devices for.

To move beyond these lightweight services to the delivery of more complex services via smart speakers or chatbots, authentication will be required, just as it is today when doing mobile banking or purchases via a mobile app. Asking users to manually enter a password, use a fingerprint or visit an app to authenticate a purchase defeats the purpose of a hands-free device. Instead, voice biometrics will be the key feature that ensures services provided by smart speakers, chatbots and ultimately the IoT can be authenticated and remain friction free.

Voice biometrics isn’t without its own security issues. You have only one voice signature, yet you can have multiple passwords; if your voice signature is compromised or hacked you have a problem. Active voice biometrics addresses this by combining voice authentication with a pre-recorded phrase, which acts as an additional layer of security. And in the same way that banks and credit card companies currently flag unusual credit card activity, a further layer of frictionless security can be provided by analysing the context of the voice command, for example where and when it was made.
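To make the layering concrete, here is a minimal sketch of how those three checks might combine. Everything in it is hypothetical: the threshold, the context rules and the field names are illustrative only, not any vendor's actual API.

```python
# Hypothetical sketch of layered "active" voice authentication.
# Thresholds, scores and context rules are illustrative only.
from dataclasses import dataclass

@dataclass
class VoiceCommand:
    match_score: float   # similarity to the enrolled voiceprint, 0.0-1.0
    spoken_phrase: str   # passphrase recognised from the audio
    location: str        # where the command was made
    hour: int            # local hour of day (0-23)

def authenticate(cmd: VoiceCommand, enrolled_phrase: str,
                 usual_locations: set) -> bool:
    """Require all three layers to pass: voiceprint, passphrase, context."""
    voiceprint_ok = cmd.match_score >= 0.85           # layer 1: biometric match
    phrase_ok = cmd.spoken_phrase == enrolled_phrase  # layer 2: pre-recorded phrase
    # layer 3: frictionless context check, like a bank flagging unusual activity
    context_ok = cmd.location in usual_locations and 6 <= cmd.hour <= 23
    return voiceprint_ok and phrase_ok and context_ok

cmd = VoiceCommand(0.92, "buy the blue dollhouse", "home", 19)
print(authenticate(cmd, "buy the blue dollhouse", {"home", "office"}))  # True
```

The point of the sketch is that no single layer decides the outcome: a convincing voice clone still fails without the phrase, and a stolen phrase still fails from an unusual place or time.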

Billions of IoT devices doesn’t mean billions of new screens, mobile apps and visual user interfaces all competing for our attention. Most of the services delivered by the IoT will be invisible, initiated by a voice command and only prompting a user when a decision is required. Voice will be the primary user interface for the IoT. To move beyond the lightweight IoT and voice services we have today to the delivery of more complex IoT and voice-initiated processes, voice authentication is a key capability. Pretty soon a voice signature will be as much a part of your bank account security as your home address and PIN are today.


In bricks-and-mortar retail we have the concept of footfall: the number of people entering a shop or shopping area over a given time frame. Businesses pay higher rents for higher-footfall locations. For organizations to engage with customers they have to locate themselves where customers are likely to hang out. This was the main reason why businesses across all industries rushed to build business apps: if customers are using apps, we must build an app. As I discussed in my previous post, for a variety of reasons mobile business apps have failed big time. Customers aren’t downloading business apps. If mobile apps are part of your customer engagement mix, they aren’t getting any footfall.

In addition, customer service via mobile apps is for the most part pitiful. Most of today’s business apps are designed to sell, not serve. Many business apps throw the user out of the native app into a browser if you want to reach customer service. As for self-service, case management and notifications via a mobile app, forget about it. No wonder few people download business apps.

The emergence of chatbots, however, provides an opportunity for organizations to engage with customers in a way that overcomes many of the limitations of the business app. A chatbot is a type of conversational agent: a computer program designed to simulate an intelligent conversation with one or more human users via auditory or textual methods. Chatbots are integrated within the apps that customers actually want to use. They live inside messaging apps such as WeChat, Kik and Messenger, which act as the operating systems for the chatbots. Chatbots thus solve the business app problem, enabling customers to engage with a business without downloading an app or going to a website.
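At its simplest, the conversational agent described above can be sketched as a rule-based responder: match the customer's message against known intents, answer from canned knowledge, and escalate when nothing matches. The intents and replies below are invented for illustration; a real bot on a platform like Messenger or WeChat would receive messages via that platform's webhook rather than a direct function call.

```python
# Minimal rule-based chatbot sketch; intents and replies are hypothetical.
INTENTS = {
    "opening hours": "We are open 9am-5pm, Monday to Friday.",
    "order status": "Please give me your order number and I'll look it up.",
    "refund": "I can start a refund for you. Which item is it for?",
}

def reply(message: str) -> str:
    """Match the message against known intents; escalate if nothing matches."""
    text = message.lower()
    for keyword, answer in INTENTS.items():
        if keyword in text:
            return answer
    # simple escalation path for when self-service cannot help
    return "Let me connect you to a human agent."

print(reply("What are your opening hours?"))   # canned self-service answer
print(reply("I want to talk about my mortgage"))  # falls through to escalation
```

Production chatbots replace the keyword lookup with natural language understanding, but the shape, intent matching plus an escalation fallback, is the same.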

Perhaps most importantly, chatbots also reflect the emergence of artificial intelligence and speech recognition as a customer service user interface. Artificial intelligence is slowly creeping into our everyday lives. Today, four of the largest global IT organizations — Apple (Siri), Google (Google Now), Amazon (Alexa) and Microsoft (Cortana) — offer speech recognition software. Customers will be able to interact with a chatbot using either text or voice. Long term, voice recognition software will become the dominant UI for customer engagement as well as for interaction with the Internet of Things (IoT).

Chatbots represent a new customer engagement channel that will soon co-exist with today’s other multichannel options of voice, chat, email, social and self-service. Chatbots will initially act as a self-service interface with simple escalation options. Eventually chatbots could even replace 1-800 numbers; Facebook is already providing developers with API tools to build Messenger chatbots and Live Chat web plug-ins for business clients.


Introduction
When is the last time you visited a bank or even talked to a bank representative? When is the last time you contacted a travel agent? Where previously we’d have used a travel agent many of us now prefer to research and book the individual components of our holiday ourselves. The history of the internet has been a story of personal empowerment. Empowered customers, especially younger demographics, are today more than happy to resolve problems for themselves valuing the convenience, speed and anonymity offered by frictionless self-service experiences.

In 2015 leading analysts indicated that web self-service interactions had overtaken all other channels and that survey respondents reported using the FAQ pages on a company’s website more than speaking with an agent over the phone. The future of customer service is thus quite clearly self-service. However, what is the future for self-service itself?

Intelligent Virtual Assistants
Intelligent virtual assistant (IVA) use is seeing rapid adoption across multiple industry sectors. IVAs attempt to humanize web self-service by delivering knowledge and information to customers via a human-like interface. The attraction for businesses is obvious. Unlike a customer service representative, IVAs can serve thousands of customers at once, they are available 24/7 across all channels, and the experience is consistent.

However, the predominant IVA experience today, typing in questions and receiving spoken answers from a computer-generated assistant, is unsatisfactory, unrealistic and, in my opinion, slightly creepy. Having a computer-designed, cartoonish image acting as the face of your organization is also a risk. As a result many organizations, particularly in the retail sector, are using IVAs as an opportunity for differentiation and a visual extension of their basic web self-service technologies rather than as a key customer engagement channel.

We are still at the nascent stage of IVA development. The next step will be to make the customer experience more realistic, which means replacing the requirement for users to type questions with speech recognition.

Speech Recognition
Today, four of the largest global IT organizations — Apple, Google, Microsoft and Amazon — offer speech recognition software. Apple was the first mover with Siri. Google followed with its own natural language user interface, Google Now; Microsoft came to market with Cortana; and most recently Amazon released its voice-controlled personal assistant, Alexa. So why do we have this sudden interest in voice recognition? The answer lies in both self-service and the Internet of Things (IoT).

Speech recognition has the potential to transform the self-service user experience, making it more natural and, most importantly, hands free. Speech recognition can deliver service to users anywhere and in any situation or context. Initially we will see IVA vendors augmenting their solutions with speech recognition capabilities to offer voice-driven information search and retrieval such as video on demand. Retailers have already begun to explore the potential of voice-driven self-service, with, for example, Domino’s Pizza introducing Dom, its virtual voice-ordering assistant.

Web Self-Service and the Internet of Things
Self-service will move beyond voice-driven information search and retrieval to deliver the ability to trigger and interact with processes using speech.

Many mistakenly view the IoT in terms of smart devices: smart thermostats, kettles, fridges and wearables. However the real power of the IoT is in the data created by the sensors themselves. Organizations that can utilize this data and deliver services based on it will dominate the IoT market. Already some of the most successful commercial IoT solutions provide services rather than simply devices to consumers. In-car telematics is widely used in the insurance industry today to monitor driver performance and to adjust insurance premiums accordingly. In medicine, heart implants send data to physicians that can be analyzed to identify any slowing of heart rhythm or rapid heartbeats. In the public realm, the IoT will be used within smart cities to provide services to constituents that will, for example, reduce congestion, optimize energy use and support public safety. The common theme in all of these examples is that the IoT is being used to provide services with little or no input required from the individual. The IoT and the devices themselves are for the most part invisible.

As humans we do not have the capacity to engage with huge numbers of IoT devices. Do we really want to be alerted and prompted on a regular basis by multiple frivolous IoT devices all competing for our attention? As a result the IoT will be almost invisible, delivering services in the background of our lives and interrupting us only when a decision is required or a result has been achieved. Successful IoT solutions will remove complexity from our lives rather than add to it.

The trigger and interaction method for all of these IoT services will be speech. With so many potential IoT devices and services, voice control provides a simpler, quicker and more convenient method of interaction than an app and a UI.
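The pattern of a spoken command quietly triggering a background process can be sketched in a few lines. The device handlers and command phrases below are invented for illustration; in a real system a speech recognition service would produce the parsed intent, and a device control API would carry out the action.

```python
# Hypothetical sketch: a recognised voice command triggers an IoT process.
# Device names, handlers and phrases are illustrative only.

def dim_lights(level: int) -> str:
    return f"lights dimmed to {level}%"

def set_thermostat(temp: int) -> str:
    return f"thermostat set to {temp} degrees"

# mapping from a recognised utterance to the background process it triggers
PROCESSES = {
    "dim the lights": lambda: dim_lights(30),
    "make it warmer": lambda: set_thermostat(22),
}

def handle_command(utterance: str) -> str:
    action = PROCESSES.get(utterance.lower())
    # the user is only interrupted when no matching process exists
    return action() if action else "Sorry, I don't know how to do that yet."

print(handle_command("Dim the lights"))  # lights dimmed to 30%
```

The design point is that the user never sees an app or a UI: the command maps straight onto a process, and feedback surfaces only when something needs their attention.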

The Personal Virtual Assistant
Self-service will eventually break free of the enterprise and become a key part of our personal lives. Mobile phone vendors have been working on personal assistants almost since the development of the first app. Your iPhone comes preinstalled with apps such as Apple Pay, Calendar, Reminders, Notes and a ticket and boarding pass wallet. In addition, Android devices arrive with a password manager, family location services and account status apps. Despite the millions of apps available, the majority of us use only a few apps daily; app fatigue appears to be setting in. Siri, Google Now and Cortana will be the catalyst for turning these apps, many seldom used and sometimes referred to derogatively as bloatware, into something more useful and easier to use.

The personal virtual assistant will be the ultimate self-service technology, bringing together multiple technologies including speech recognition, knowledge management, wearables, the IoT, complex event processing and artificial intelligence. This is imminent. Today, for example, Microsoft Cortana, if approved by the user, can scan your email to see if you have a flight coming up, then use that information to alert you when it’s time to leave for the airport.

Service scheduling, booking a restaurant or flight, insurance renewals, ensuring your financial products are on the best rate of interest, price comparisons, personal security and health monitoring — all of these are important yet mundane processes we perform on a regular basis that most of us would be happy to outsource to a personal assistant. In effect the personal assistant starts to act as an airbag for our lives, ready to step in when we need it, not constantly competing for our attention.

Conclusion
Self-service is the future of customer service, and its evolution is inextricably linked to developments in speech recognition and the IoT. In the IoT world, speech will be the most convenient gateway to access information and services. Self-service will go beyond the simple search and delivery of information to the delivery of more complex customer service processes.

Ultimately we are looking at a post-app world where information and data from multiple sources, including devices, is combined to deliver services. Then when you need information or a service, you’ll just ask for it.


It looks like our homes could soon be invaded by a swarm of buttons. Earlier this year (2015) we had the launch of Amazon Dash and now we have Flic. Flic takes Dash a step further: it is a wireless smart button that can be quickly programmed to let you play music, make calls or even order a pizza, all at the press of a button. What Dash and Flic have in common is that with the press of a button the user is triggering a process. With Dash it is the process of placing and executing an order for a consumable product from the Amazon store; with Flic it might be triggering a food delivery process.

There are other companies out there, such as IFTTT, looking to simplify how we execute day-to-day processes and optimize how we interact with mobile applications. In fact it is possible to connect Flic to IFTTT and, at the press of a Flic button, post a Facebook or Twitter update. Both Dash and Flic are obviously Internet of Things (IoT) devices. What they begin to illustrate is the critical relationship between the IoT and process: there is no point in smart devices collecting data and monitoring your environment if we cannot rapidly act on the insights obtained.

Yet Dash and Flic don’t have a long-term future. Why do we need buttons to place orders or trigger processes when we could use our voice? Today, three of the largest global IT organizations — Apple, Google and Microsoft — offer voice recognition software. Apple was the first mover with Siri. Google followed with its own natural language user interface, Google Now, and finally Microsoft came to market with Cortana. Why do we have this sudden interest in voice recognition? The answer is the IoT.

With the iOS 8 operating system, Siri became completely hands free, with the “Hey Siri” command replacing initiation via the home button. In addition, Siri began to integrate with Apple’s HomeKit IoT features. With iOS 9, Apple has taken Siri’s capabilities further, putting new functionality into HomeKit, including smarter Siri controls and support for new device types. Siri, and thus voice recognition, is a key part of the Apple IoT strategy, with voice commands being used to trigger simple processes such as dimming the lights or adjusting the thermostat. It does not take a great leap of the imagination to expect that we will soon be able to ask Siri to book a flight, a meal at a restaurant or a transfer of money into a bank account, all hands free and without accessing an app or pressing a button.

So while Flic and the Amazon Dash Button may have a limited shelf life, they do point towards emerging methods of process initiation that will eventually coalesce around voice. In fact Amazon, as well as offering the Dash Button, also offers a Wand device that uses voice recognition to place order requests. With so many potential IoT devices, sensors and services, voice control provides a simpler, quicker and more convenient method of interaction with the IoT than an app, a UI or a button.