Affectiva's technology helps AI assistants embedded in chatbots and driverless cars to understand how you are feeling

Humans are already forming relationships with their artificial intelligence (AI) assistants, so we should make that technology as emotionally aware as possible by teaching it to respond to our feelings.

That is the premise of Rana el Kaliouby, cofounder and CEO of Affectiva, an MIT spinout that sells emotion recognition technology based on her computer science PhD, during which she built the first computer that could recognise emotions.

The machine learning-based software uses a camera or webcam to identify parts of human faces (eyebrows, the corners of eyes, etc), classify expressions and map them onto emotions like joy, disgust, surprise, anger, and so on, in real time.
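The pipeline described above, landmark cues mapped onto emotion labels, can be illustrated with a toy sketch. This is not Affectiva's actual code or API; the cue names, weights and thresholds below are all invented for illustration, standing in for what a trained model would learn.

```python
# Illustrative sketch only (not Affectiva's pipeline): given intensities
# (0.0-1.0) for a few facial cues derived from landmarks such as eyebrows
# and the corners of the eyes and mouth, map them onto basic emotions
# with simple hand-written rules. A real system learns this mapping.

def classify_emotion(cues):
    """cues: dict of cue name -> intensity in [0.0, 1.0]."""
    brow_raise = cues.get("brow_raise", 0.0)
    brow_lower = cues.get("brow_lower", 0.0)
    lip_corner_pull = cues.get("lip_corner_pull", 0.0)  # a smile
    nose_wrinkle = cues.get("nose_wrinkle", 0.0)
    jaw_drop = cues.get("jaw_drop", 0.0)

    # Each rule scores one emotion; the highest score wins,
    # falling back to "neutral" when no cue is strong enough.
    scores = {
        "joy": lip_corner_pull,
        "surprise": 0.5 * brow_raise + 0.5 * jaw_drop,
        "anger": brow_lower,
        "disgust": nose_wrinkle,
    }
    emotion, score = max(scores.items(), key=lambda kv: kv[1])
    return emotion if score > 0.3 else "neutral"

print(classify_emotion({"lip_corner_pull": 0.9}))              # joy
print(classify_emotion({"brow_raise": 0.8, "jaw_drop": 0.7}))  # surprise
print(classify_emotion({}))                                    # neutral
```

Run per frame of a webcam feed, a classifier like this (in reality a trained model rather than hand rules) is what turns raw facial geometry into the real-time emotion labels the article describes.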

“We are getting lots of interest around chatbots, self-driving cars, anything with a conversational interface. If it's interfacing with a human it needs social and emotional skills. This tech is already being integrated into robots,” el Kaliouby tells Techworld during an interview at Web Summit.

Research on a chatbot system integrated into cars found that people confided deeply emotional issues to it, such as their own ill health or the fact that they were in abusive relationships, according to el Kaliouby.

“People are building deep relationships with these things, yet they're not built with any emotional intelligence. If they're doing this anyhow, we'd better respond to it,” she says.

The technology has a wide range of applications. It was developed by el Kaliouby and Rosalind Picard at the Massachusetts Institute of Technology (MIT) to help people with autism understand social and emotional cues, but they spun out a startup in 2009 after their faculty encouraged them to commercialise it. The technology was initially adopted largely by advertisers.

“The problem we're solving there is that marketers and advertisers want to evoke emotions but have no way to assess if they're good at it or not, despite spending many millions,” el Kaliouby says.

Despite the impressive technology, it's hard to feel inspired by advertisers trying to manipulate our emotions to sell us things.

“A couple of years ago we had our heads down working with advertising. It's a great market, generates a very high profit. But in late summer I was like 'what am I doing?' We kind of lost our sense of purpose,” she says.

Affectiva decided to develop a software development kit (SDK) and make it free for any company with under $1 million in annual revenue, as well as for personal, open-source and academic projects.

The technology is now being used by a wider pool of clients, and is currently being tested by game developers and creative agencies to create 'adaptive' games and films that alter storylines according to the emotional reaction of the user.

While the technology itself is of course neutral, it is powerful, and you can't help but feel it could be quite dangerous in the wrong hands. Does el Kaliouby perceive any risks?

“We're very big on informing people it's running. We never turn on the camera without people knowing, and we never record faces without explicit opt-in. For advertising research you get paid. And we've stayed away from some applications. We've been approached by security agencies. We probably could make a ton of money but that doesn't match our values,” she says.

She says a company called HireVue is using it to assess candidates via video rather than CV: applicants send in a one-minute video pitch, which is then analysed for emotional expression, with the software used to rank their suitability.

Some of this technology is so close to the realms of sci-fi it's hard to resist asking the question: “Have you ever seen Black Mirror?” She hasn't. Yet.