Engineers Can’t Avoid the Ultra-Personalized Future of IoT

The future is here with fast-evolving IoT devices integrated into every aspect of our work and home lives. But are these smart devices safe? And could we even avoid this invasion anyway?

The IoT has been the subject of media hype for several years now. We are becoming more and more connected, and the IoT is here to stay. In fact, according to Gartner, the number of connected things will surge from the current 11 billion to 20.8 billion by 2020.

The explosion of smart consumer gadgets has been powered by universal wireless connectivity, cloud computing, low-cost sensors, and artificial intelligence. The growth in each of these areas is seeing the IoT coming to fruition, changing the way we live our lives for good.

But how is this invasion a positive evolution of current technology? Could this connectivity be a warning signal for a future monopoly and control of data?

AI Makes IoT Smarter

Our smart objects can collect and analyze data, learn our habits and know what we want — while, of course, communicating their decisions to us and other connected things. Integrating AI with a smart home adds a level of personalization that will truly allow the seamless integration of progressive technology with the daily lives of consumers.

AI is playing an integral part in shaping the mainstream adoption of the IoT; its progress makes that adoption all but inevitable, despite what some more entrenched designers might argue.


The Nitty Gritty of Processing Power

The front-facing elements used in our technology, such as cameras and touchscreens, were initially used to passively collect data and then process it in the cloud. Now, cameras are able to do more than capture images. They can understand what they see while microphones can listen and understand what they hear.

The processing power being developed is about to be pushed to the edge. As everyday devices become more powerful, the load on data centres is reduced and, in some instances, the need for cloud capabilities is eliminated entirely. Doing more computation and analytics on the devices themselves enables reduced latency for critical applications and device-to-cloud data synching for applications harnessing machine learning.

In fact, the proliferation of machine learning and AI for IoT applications is what is driving edge computing. Devices need to run complex deep learning networks quickly and without using much power. This is driving the adoption of heterogeneous compute architectures and incorporating engines, such as CPUs, GPUs, and DSPs, in the devices themselves. Assigning different workloads to the most efficient compute engine improves both performance and power efficiency.
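The idea of assigning workloads to the most efficient compute engine can be sketched as a simple dispatcher. The engine names and the affinity table below are illustrative assumptions, not any vendor's API or benchmark data:

```python
# Toy dispatcher: route each workload to the engine best suited to it.
# The affinity table is an invented illustration of the principle.
ENGINE_AFFINITY = {
    "control_logic": "CPU",    # branchy, latency-sensitive code
    "image_inference": "GPU",  # massively parallel matrix math
    "audio_filtering": "DSP",  # streaming signal processing
}

def dispatch(workload: str) -> str:
    """Return the preferred engine for a workload, falling back to the CPU."""
    return ENGINE_AFFINITY.get(workload, "CPU")

print(dispatch("image_inference"))  # GPU
print(dispatch("thermostat_poll"))  # CPU (fallback)
```

In a real heterogeneous system the routing decision would weigh measured power and latency per engine; the point here is simply that the mapping, not the hardware alone, delivers the efficiency gain.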


“Hey Google/Alexa/Siri”

The first versions of voice assistants introduced could produce appropriate responses by simply detecting cue words and transmitting pre-programmed responses. Today’s smart speakers, however, have taken this to an entirely new level, becoming increasingly human-like and multi-functional.
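That first-generation behaviour amounts to keyword matching. A minimal sketch, with cue words and canned responses invented for illustration:

```python
# First-generation assistant: detect a cue word, return a pre-programmed reply.
CANNED_RESPONSES = {
    "weather": "Today looks sunny.",
    "time": "It is nine o'clock.",
}

def respond(utterance: str) -> str:
    """Scan the utterance for known cue words; fall back to a default."""
    for cue, reply in CANNED_RESPONSES.items():
        if cue in utterance.lower():
            return reply
    return "Sorry, I didn't catch that."

print(respond("What's the weather like?"))  # Today looks sunny.
```

Everything that follows in this section is what today's assistants layer on top of this brittle baseline.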

From these humble beginnings, we now have access to a family of assistants, including Siri, Alexa, Cortana and Google Assistant. These voice assistants can turn on your lights, order food, play games with you, and converse.

Natural Language Understanding

This is key to an AI assistant's comprehension of what it hears. The assistant first picks out proper names, then classifies the remaining words into the eight lexical categories. Parsing then analyzes the string of language in accordance with grammatical rules.
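The first two steps above can be sketched with a toy tagger. The mini-lexicon and the capitalisation heuristic for proper names are crude illustrations, not how production NLU works:

```python
# Toy NLU pass: spot proper names, then tag remaining words with a
# lexical category from a tiny hand-made lexicon (illustrative only).
LEXICON = {
    "turn": "verb", "on": "preposition", "the": "determiner",
    "lights": "noun", "please": "adverb",
}

def tag(sentence: str) -> list[tuple[str, str]]:
    """Return (word, category) pairs for each word in the sentence."""
    tags = []
    for word in sentence.split():
        clean = word.strip(".,?!")
        if clean[:1].isupper():  # crude proper-name heuristic
            tags.append((clean, "proper noun"))
        else:
            tags.append((clean, LEXICON.get(clean.lower(), "unknown")))
    return tags

print(tag("Alexa turn on the lights"))
```

A real assistant would feed these tags into a grammatical parser, the third step described above.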

Speech Synthesis

The artificial production of human speech is a major part of the personalization of these AI assistants. While these assistants may not pass the Turing Test, projecting a human-like adult female voice makes them more trusted by users. But how does this technology translate text into synthetic speech?

Voice assistants first convert the input text into graphemes using a lexicon. The grapheme string is then translated into a phoneme string, the units of sound that distinguish one word from another. A prosodic model then annotates the phoneme string with pitch, length, volume, intonation and rhythm. Finally, acoustic synthesis models convert the phoneme string and its prosodic annotation into smooth speech.
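The pipeline can be sketched end to end. The two-word pronunciation lexicon (loosely ARPAbet-style) and the pitch markup are invented stand-ins; real systems use large pronunciation dictionaries and far richer prosodic models:

```python
# Toy text-to-speech front end: lexicon lookup -> phoneme string ->
# prosodic annotation. Everything here is an illustrative miniature.
PHONEME_LEXICON = {
    "hello": ["HH", "AH", "L", "OW"],
    "world": ["W", "ER", "L", "D"],
}

def to_phonemes(text: str) -> list[str]:
    """Look up each word's phonemes; unknown words get a placeholder."""
    phonemes = []
    for word in text.lower().split():
        phonemes.extend(PHONEME_LEXICON.get(word, ["?"]))
    return phonemes

def add_prosody(phonemes: list[str]) -> list[tuple[str, str]]:
    """Mark the final phoneme with falling pitch, as in a statement."""
    return [(p, "fall" if i == len(phonemes) - 1 else "level")
            for i, p in enumerate(phonemes)]

annotated = add_prosody(to_phonemes("hello world"))
print(annotated)
```

The acoustic synthesis stage, which turns this annotated string into a waveform, is the part modern systems hand to neural models.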

Dialogue System

AI assistants need to detect what you need and decide how to respond. They do this by classifying features to dialogue acts such as requests, statements or confirmations.
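Classifying utterances into dialogue acts can be sketched with surface-cue rules. The rules below are invented illustrations; production systems learn these boundaries from data:

```python
# Toy dialogue-act classifier using surface cues (illustrative rules only).
def classify_act(utterance: str) -> str:
    """Map an utterance to one of: request, confirmation, statement."""
    u = utterance.strip().lower()
    if u.endswith("?") or u.startswith(("what", "when", "where", "can you")):
        return "request"
    if u in ("yes", "ok", "okay", "right", "confirmed"):
        return "confirmation"
    return "statement"

print(classify_act("Can you dim the lights?"))  # request
print(classify_act("okay"))                     # confirmation
print(classify_act("I like jazz"))              # statement
```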

Systems also require a state tracker to maintain multiple sessions of interaction and record the most recent information. Reinforcement learning is now being implemented to make assistants more astute in their responses.


The Unavoidable Intimacy of IoT

Smart speakers haven’t as yet reached the intelligence level of Samantha — the AI system in the film “Her”. However, they are definitely becoming more widely adopted, with more recent models using machine learning to become hyper-personalized for individual users.

Developers are working on a Hybrid Emotion Inference Model (HEIM), which uses Latent Dirichlet Allocation (LDA) to detect text features and a Long Short-Term Memory (LSTM) network to model acoustic features. From vocal inflexions to product selections, favourite TV shows to intimate messages to loved ones, these AI assistants will have access to, and an understanding of, the deeply personal aspects of consumer lives — and will predict what we do next.

As the IoT evolves, issues around privacy and security will need to be addressed to manage the vast volumes of data these devices will generate. The IoT crosses boundaries between enterprises and consumers, which opens up many potential issues, but also endless opportunities.

All of this technology is being pushed further and further, and even though these IoT devices might seem like silly consumer commodities at first glance, there’s no way any of us will avoid them. The IoT is here to stay.