How can companies use machine learning to efficiently understand the needs and wants of their customers, without sacrificing the insights that come from employees’ intuition and empathy?

My company is in the business of helping other firms create new products and services that will be both functionally useful and emotionally resonant with customers. As part of this work, we solicit materials online from a firm’s customers and potential customers. In a given year, we receive approximately 13 million unstructured text submissions and over 307,000 photos and videos from about 167,000 diverse contributors, all of whom are answering open-ended questions posed by us, as well as generating their own conversations on topics of their choosing. Our challenge: finding the unmet needs and often unarticulated longings in this wealth of content. To do this, we use a method of human-supervised machine learning that we think other companies could learn from. Here’s how it works.

Traditional computer programming relies on articulating a set of explicit rules for the computer to follow. For example: "If the phrase contains the word 'mad,' code it as negative," or "If the object in the picture has four wheels, tag it as a car." But what happens when the four-wheeled object comes in a box of Cracker Jack or a Happy Meal? Should it be tagged as a toy? As a swallowing hazard?

You can see the limitations of this rule-based approach when trying to understand unstructured human expression. To be “mad” is to be insane or angry; to be “mad about” is quite the opposite. And not only can four-wheeled objects be vacuum cleaners or pull toys, but cars can have three wheels.
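The brittleness of hand-written rules is easy to demonstrate in a few lines of code. The sketch below is a deliberately naive illustration, not anything from our actual systems; the `rule_based_sentiment` function and its single rule are hypothetical:

```python
def rule_based_sentiment(text: str) -> str:
    """Classify a phrase using one explicit, hand-written rule."""
    # The rule: if the word "mad" appears, the sentiment is negative.
    if "mad" in text.lower():
        return "negative"
    return "neutral"

# The rule happens to work here:
print(rule_based_sentiment("This delay makes me mad."))    # negative (correct)

# But it misfires on "mad about," which expresses enthusiasm:
print(rule_based_sentiment("I'm mad about this product!")) # negative (wrong)
```

No matter how many such rules are added, each one is an assertion the programmer thought to make in advance, and each new idiom, product, or context can break it.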

Nobody can articulate all the rules for classifying all things, and certainly nobody can document all of the ways human emotion is expressed. As humans, we learn, classify, and act based on pattern recognition and past associations. We make lightning-fast assumptions based on patterns, purpose, and context.

The type of machine learning we employ—supervised machine learning—also relies on learning from past associations. By providing examples that we’ve already classified, the computer can “learn” from experience without being explicitly programmed, and get smarter over time as that experience accumulates.
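The shift from rules to learned associations can be illustrated with a toy word-count classifier. This is a minimal sketch of the general idea, not our production approach; the `train` and `classify` functions, the labels, and the tiny example set are all hypothetical:

```python
from collections import Counter

def train(examples):
    """Learn word frequencies per label from pre-classified examples."""
    counts = {"positive": Counter(), "negative": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def classify(model, text):
    """Score a new phrase by how often its words appeared under each label."""
    scores = {
        label: sum(counter[word] for word in text.lower().split())
        for label, counter in model.items()
    }
    return max(scores, key=scores.get)

# Human-classified training examples stand in for the supervision step:
examples = [
    ("i am so mad about this brand", "positive"),
    ("mad about the new flavor", "positive"),
    ("this makes me mad", "negative"),
    ("mad at the long wait", "negative"),
]
model = train(examples)

# No one wrote a rule for "mad about"; the association was learned:
print(classify(model, "mad about the service"))  # positive
```

The point is the workflow, not the arithmetic: humans supply classified examples, the machine generalizes from them, and accuracy improves as the pool of supervised examples grows.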

Machine learning is only one tool in our always-evolving toolkit. But it’s a very helpful one—and an approach that reflects our commitment to making companies more human—for multiple reasons.

For example, businesses naturally focus on what’s easily measured in their efforts to evaluate and improve performance and customer experience. That bias is amplified in traditional market research, where people are typically asked closed-ended voting and rating-scale questions that yield responses which are easily quantifiable and repeatable. But often the greatest insight is found in spontaneous conversation with customers—not in the online survey that shoppers are asked to complete, but in the photos they take, the tweets they post, and the advice they offer in online forums. So instead of forcing people into the role of “respondent” and limiting their input only to the answers to questions we’ve thought to ask, we encourage our community members to share in multiple ways, knowing that machine learning will make us more efficient in interpreting many forms of organic, unstructured human expression. In that sense, it enables us to be more human, and more customer-centric.

Machine learning doesn’t relieve us of the need for (and the great pleasure of) exploration. Rather, it serves as our metal detector, surfacing the signals in the data and alerting us to where to dig for gold. For example, in a private community that we ran for people with schizophrenia, we expected and saw plenty of conversation about symptoms, medications, and side effects. But when we analyzed the unstructured text emerging from that group, we saw an unusual number of references to art, music, and writing. That led us to more deeply explore the importance of creative expression in these patients’ lives, which in turn informed our client’s messaging and support programs in new and powerful ways.

This sort of analysis comes with risks and limitations. Chief among them are the biases implicit in the training sets themselves, which can lead to wrong, ineffectual, or even unethical conclusions. After all, computers aren’t curious. The machine can’t ask, “Whose perspective haven’t we solicited?” It can’t suggest, “What if we asked the question differently?” It remains incumbent on us as thoughtful, self-aware people to do that, and to audit our algorithms for bias.

Moreover, machines lack the human qualities that are so essential to business growth. While they can be taught to recognize sentiment, they can’t be taught to feel. Emotional arousal is crucial to driving both individual and organizational change and building strong consumer relationships. And because computers lack emotions, they lack the power to empathize with or excite ours.

That emotional deficit—which in turn creates a relational deficit—is why we tend to treat machines as tools, not as colleagues. As Kurt Gray observed in a fascinating HBR article, “Trusting team members requires at least three things: Mutual concern, a shared sense of vulnerability, and faith in competence. Mutual concern—knowing that your teammates care about your well-being—is perhaps the most basic element of trust … We mistrust AI not only because it seems to lack emotional intelligence but also because it lacks vulnerability.”

Absent that most vital element of trust—mutual concern—we’ll continue to value and use machine learning, but not “relate” to the machine. But when those human ingredients are in place, companies can forge strong, durable consumer connections that machine learning can help build, rather than replace.

Julie Wittes Schlack is a cofounder and the senior vice president for innovation and design at C Space, a global customer agency.