A Boston-area tech startup with a pretty bold vision is announcing its expansion to Silicon Valley today. Waltham, MA-based Affectiva, an MIT Media Lab spinout, is opening an office in Santa Clara, CA, where its CEO Dave Berman is based, along with other key members of the team.

Affectiva, which has about 25 employees (most of them in Waltham), is looking to hire 15 people in Santa Clara, mostly in sales, marketing, and business development, Berman says. He’s a WebEx veteran, and in fact Affectiva’s new office sits right across from his former company, which is now part of Cisco Systems.

“I’ve been commuting [to Boston] for two years,” Berman says.

Affectiva, which started in 2009, uses computer vision and machine learning software to interpret people’s facial expressions, gestures, heart rate, and other physiological responses. The idea is to be able to track and understand consumers’ emotions as they do things like watch video ads, play computer games, or interact with Web or mobile content. The software works in conjunction with a webcam and/or biometric sensors, like a wristband that monitors temperature, motion, and sweat—and consumers have to opt in to all of that, of course.

The company’s technology comes from the Media Lab’s “affective computing” research group, led by professor Roz Picard. Picard and research scientist Rana el Kaliouby are founders of the startup, which most recently raised $5.7 million in a Series B funding round led by Kantar (part of communications firm WPP) and Myrian Capital. “I wanted something that was next-gen, a software-as-a-service model, a huge opportunity. And something pretty early,” says Berman about what attracted him to work at Affectiva. “I met Roz and Rana and they blew me away. This is going to be everywhere.”

Talking to Berman, I still felt like I was in a sci-fi movie, though. I’ve followed Picard’s research over the years—including recent work with el Kaliouby on autism-related technologies—but I don’t have an intuitive feel for how well a computer can classify a person’s emotions based on their expressions. (I would guess not all that well, though it’s improving.)

Berman walked me through an eye-opening demo. The company had volunteers watch a series of TV commercials—standard baby-product stuff, for instance—while a webcam captured their facial expressions. Affectiva’s software identifies key movements and facial markers—regions around the eyes, lips, and more—and infers positive and negative reactions over the course of each ad. What’s more, it tries to distinguish between different mental states, such as frustration, confusion, excitement, contentment, concentration, interest, disagreement, and uncertainty. (Another Affectiva technology, still in the early stages of development, monitors heart rate by analyzing webcam images to track blood flow to a person’s face.)

The business idea is to “send real-time insights back to the marketer,” Berman says, and to provide a sample pool of “thousands of [consumers] in seconds.” On the advertising front, the company is already making money. Its customers include Disney Media, Millward Brown, IPG, and some 150 universities and nonprofits on the research side. But you can also imagine a much wider range of applications, if enough people opt in to the technology—which Berman says could help make “content more relevant to them.”

Imagine retail kiosks that react to customers’ expressions as they browse products, trying to show them things they like, he says. Or websites and smartphones that react to people in real time as they interact with content, play video games, take online education courses, or watch presidential debates. It’s all part of the big vision of computers that understand their users’ emotional states—something that’s been talked about for decades, but is only now starting to see the commercial light of day.