In a world where so many of us agree that human connection is essential to emotional health and healing, there is a growing trend toward creating artificial intelligence meant to take the place of (or at least fill in for) that connection.

Chatbots and other forms of ‘artificial intelligence’ are appearing all over the place, offering pre-programmed support to people who may be lonely, struggling, or otherwise seeking connection.

“Joy” is one particularly visible support bot that uses pre-programmed responses to simulate support and help people track their emotions over time. “Woebot” is quite similar, although it relies on multiple-choice buttons rather than allowing much free-form typing. And while Joy and Woebot both got their starts through the Facebook chat system, “WYSA” is a phone application meant to work as a ‘coach’ trained in such popular approaches as Cognitive Behavioral Therapy and Motivational Interviewing.

Unfortunately, fake or ‘programmed’ connection is often like no connection at all, and sometimes these bots give responses that are dismissive or even hurtful. While they tend to come with plenty of disclaimers, those disclaimers are not prominently displayed, and some people who use these services will likely never see them, or ever quite understand what they are interacting with and why it responds the way it does.

Beyond that, there is the question of privacy. Who can see what you say to a bot? Is it programmed to call in ‘emergency’ professionals when you might not be expecting it? What if these bots cause more harm than good?

These are all questions well worth thinking about.

Read “Killjoy: the Story of a Misguided Mental Health Bot” by Sera Davidow for more on this topic: