Last year, I was working with a telecommunications software company
that wanted to integrate a chatbot into its solutions. Like many
companies, it was looking to reduce support costs, including calls and
TWC (time with customer). The company researched a handful of chatbots
and quickly settled on one. Living by the fail-fast methodology that so
many agile shops love to embody, it launched the chatbot into its
software.

Of course, the chatbot was not positioned as a bot; it was given a
name (Sally) and a nice headshot. If you were on a screen and idle for
a spell, Sally would pop up in the lower right-hand corner and ask if
you needed help. True to the fail-fast mantra, my client didn't
test Sally--it launched her. Then the team ate cake in the break room
and celebrated how much money Sally would save them. After that, the
calls started.

At first, it was just the usual customers--the ones who always call
when any of the software changes. But more calls started coming in from
previously happy customers. Many customers initially engaged with
Sally but quickly learned she was limited in her ability to help. If
customers needed assistance with their account info, for example, they
could ask Sally how to change their password. Sally would answer the
question but then try to make conversation, using the customer's name,
location, the time of year, and other personal data points to construct
a seemingly lifelike exchange.

Problems arose when Sally started profiling customers. In one
instance during the holiday season, Sally was chatting with a customer
named Rhonda. Using the time of year and Rhonda's gender, Sally asked
whether Rhonda was busy making dinner for her family. As a single,
career-focused non-cook, Rhonda took tremendous exception to
Sally's assertion that a female would be busy in the kitchen during
the holidays.

In the ensuing weeks, the calls, emails, and negative social media
outcry persisted. Sally lasted less than a month. Limited resources
and the associated costs have kept my client from fixing her.

Whenever I relay this story, folks always ask why Sally was a flop.
Sally was not the problem; the data she inherited was. The problem
with Sally--and so many of her chatbot cohorts--is that she was fed
bad (in some cases, harmful) data that was simply carried over into
her codebase.

FIXING AI WITH CONTENT STRATEGY

We crave Big Data, but increasingly, unvalidated data is flooding
our AI-assisted systems. AI relies on algorithms, but in many cases,
those computations draw on poor legacy data. In her book Technically
Wrong: Sexist Apps, Biased Algorithms, and Other Threats of Toxic
Tech, Sara Wachter-Boettcher writes, "Reliance on historical data is a
fundamental problem with many algorithmic systems." Do you think many
of today's chatbot startups are checking the validity of the data being
fed into their algorithms?
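What would checking that validity even look like? As a minimal sketch, a screening pass could run over legacy records before they ever reach a conversational system. The field names here ("gender", "marital_status", and so on) are hypothetical; the point is that inferred or sensitive attributes get stripped and logged rather than passed through unexamined:

```python
# Minimal sketch: screen legacy customer records before feeding them
# to a conversational system. The field names are hypothetical; any
# real deployment would draw this list from a vetted data policy.

SENSITIVE_FIELDS = {"gender", "marital_status", "age"}

def screen_record(record):
    """Return a copy of the record with sensitive fields removed,
    plus a sorted list of the fields that were dropped."""
    dropped = sorted(SENSITIVE_FIELDS & record.keys())
    cleaned = {k: v for k, v in record.items() if k not in SENSITIVE_FIELDS}
    return cleaned, dropped

legacy = {"name": "Rhonda", "gender": "F", "location": "Austin"}
cleaned, dropped = screen_record(legacy)
print(cleaned)   # {'name': 'Rhonda', 'location': 'Austin'}
print(dropped)   # ['gender']
```

Had something this simple sat between the legacy data and Sally, she would never have known Rhonda's gender in the first place.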

One way to combat this is to use content strategy artifacts.
Personas, research, voice and tone style guides, and a fully vetted
taxonomy are vital tools that can be reused when configuring chatbots.
Any chatbot should be internally piloted and tested before being put in
front of customers. Usability testing your chatbot is also a prime
opportunity to revisit your voice and tone guidelines and ensure its
language adjusts to where users are in their journey. And just to be
safe, have your chatbot skip the small talk.
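That last rule can even be enforced automatically. As a minimal sketch, assuming a hypothetical set of banned placeholders drawn from your voice and tone guide, a pre-launch lint over response templates might look like this:

```python
# Minimal sketch: lint chatbot response templates against a
# voice-and-tone rule before launch. The banned-placeholder list is an
# assumed policy, not a real tool's API -- the idea is simply that
# templates keying off demographic attributes never ship.

BANNED_PLACEHOLDERS = {"{gender}", "{marital_status}", "{age}"}

def lint_template(template):
    """Return the banned placeholders a template uses, if any."""
    return sorted(p for p in BANNED_PLACEHOLDERS if p in template)

templates = [
    "Hi {name}, need help changing your password?",
    "As a {gender}, you might enjoy...",   # the Sally mistake
]

for t in templates:
    problems = lint_template(t)
    if problems:
        print("REJECT:", t, problems)
```

A check like this is no substitute for usability testing with real users, but it would have caught Sally's holiday-dinner question before a single customer saw it.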