Magic, an SMS-based buying assistant, was the most coveted company at Y Combinator’s Demo Day in March 2015, ultimately raising $12 million from Sequoia. In the following months, GoButler raised $8 million from General Catalyst for an almost identical service, and Operator announced it had raised $10 million from Greylock. Suddenly, everyone was talking about the battle for tech’s next frontier.

(…)

Pavel Durov announced the expansion of the Telegram Bot Store and Ted Livingston staked out Kik’s claim to be the WeChat of the West. By the end of the year, Slack had announced the Slack App Directory, supported by an $80 million fund to fuel the growth of the ecosystem, and Google was rumored to be developing its own chatbots.

And while the hype will most certainly die down, a lot of the new (and much improved) technology will stay. Are we going to get our news and weather through a Kik bot? I don’t know. But right now, there are a lot of developers making all kinds of different, sometimes even useful bots.

Regular readers of this irregularly updated blog will know that about seven months ago I started a few projects centered around making non-malicious online bots, mainly Botwiki and Botmakers.

When I created the Botmakers Slack group, one of the first things I had to do was to create a Code of Conduct. Now, this is not an easy task as it is (for that reason, we have a dedicated channel where anyone can openly question, discuss, and suggest improvements to the rules we all agree to abide by).

But an online group where people create something that can in turn interact with other people (and other bots!) poses an extra challenge of its own. That’s why, immediately after compiling what I hope are good guidelines for an online community, I decided that our Code of Conduct should also apply to the bots created by the group’s members.

This actually makes perfect sense. Here’s a little secret, my botmaking friend: You are the bot.

There’s already a huge push towards making the tech industry more diverse, and it’s going to be even more important once these automated “digital assistants” get even more mainstream:

Siri found me 15 places to get a burrito in South Philly after 10pm. Siri found me three videos and five articles when I asked it how to roast a chicken. Siri even gave me tips for winning a fistfight.

But Siri had nothing to offer when I asked for help with rape, sexual assault, and sexual abuse. No resources. No comfort. It didn’t even bother to do a web search.

Here’s another great article highlighting more general problems with writing computer algorithms:

But in another context, user feedback can harden societal biases. A couple of years ago a Harvard study found that when someone searched in Google for a name normally associated with a person of African-American descent, an ad for a company that finds criminal records was more likely to turn up.

(…)

He says other studies show that women are more likely to be shown lower-paying jobs than men in online ads. Sorelle Friedler, a computer science professor at Haverford College in Pennsylvania, says women may reinforce this bias without realizing it.

As exciting as it is to be part of a new trend, people participating in it also have a huge responsibility.

What we’re witnessing could be the early days leading up to the future where machines and the way we interact with them — and they with us — are much more humanized, and much more entrenched in our society. But all we’re doing is taking our understanding of what being and interacting with a human is and trying to write programs and dialogs based on that.

The bot knows a few hard-coded phrases, so as long as the creator is a nice person, the bot has a very low chance of offending anyone.

With bots where you have complete control over what they say, it’s all about understanding who your audience is and adjusting your language accordingly. The best you can do here is to test your bot with a diverse audience and be very open to the feedback you will receive.

Some bots crawl various data sets and post what they find. With these bots, it really depends on the particular data set, but in general, since you don’t have complete control over your bot’s output, you will probably want to review the data before using it, and keep an eye on your bot’s output. Adding a simple word filter, if necessary, won’t hurt.

The very least you’d have to do is to add a basic filter, so that your bot doesn’t post tweets with offensive words. But as I myself learned through writing some creative regular expressions to block the N-word when my Detective game somehow started getting popular on 4chan, people will find a way around any obstacles.
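Here’s a minimal sketch of what such a basic filter might look like in Python. The blocklist and the normalization step are purely illustrative (in practice you’d want a maintained word list rather than one you roll yourself), and the normalization is exactly the kind of “creative regular expression” mentioned above, catching simple spacing and punctuation tricks:

```python
import re

# Illustrative blocklist; a real bot should use a maintained list,
# not a couple of hand-picked placeholder words like these.
BLOCKLIST = ["badword", "slur"]

def is_offensive(text):
    """Return True if the text contains a blocked word, even when it's
    obfuscated with spacing or punctuation (e.g. "b-a-d-w-o-r-d")."""
    # Strip everything that isn't a letter, so "b.a.d.w.o.r.d" and
    # "B A D W O R D" both collapse to "badword" before matching.
    normalized = re.sub(r"[^a-z]", "", text.lower())
    return any(word in normalized for word in BLOCKLIST)

def safe_to_post(tweet):
    """Gate a generated tweet before the bot posts it."""
    return not is_offensive(tweet)
```

Note the trade-off: aggressive substring matching like this can also flag innocent words that happen to contain a blocked string, which is one more reason to keep reviewing what your bot actually posts.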

You will definitely want to add a more advanced filter, for example, one based on the sentiment of the input.

Here’s another example from my own experience: Eddbott, a work-in-progress Twitter-based multiplayer Tamagotchi, can detect when you’re trying to feed it by uploading an image. But it will refuse to “eat” it if you accompany the image with an insult.

@eddbott can detect if a tweet contains an image of food … and if the accompanying message is positive, it will “accept the food”, which will decrease the eddbott.stats.hunger. And if, for example, you tweet an image of a slice of cake and say “Choke on this, you bastard”, it won’t accept the food.
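The decision logic described above could be sketched like this in Python. This is not Eddbott’s actual code, and the insult list and class names are hypothetical; the point is simply that the bot combines two signals, whether the tweet contains a food image and whether the message is friendly, before updating its state:

```python
# Hypothetical insult list; the real bot would use a proper
# sentiment check rather than a hand-picked set of words.
NEGATIVE = {"choke", "bastard", "stupid", "hate"}

class Eddbott:
    def __init__(self):
        self.hunger = 10  # the eddbott.stats.hunger value from the post

    def feed(self, has_food_image, message):
        """Accept the food only when a food image is present AND the
        accompanying message contains no insult."""
        words = {w.strip(".,!?") for w in message.lower().split()}
        if has_food_image and not (words & NEGATIVE):
            self.hunger = max(0, self.hunger - 1)
            return "accepted"
        return "refused"
```

In this sketch, tweeting a slice of cake with a friendly message lowers the bot’s hunger, while the same image with “Choke on this, you bastard” gets refused.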

So really, it’s all about creative thinking, listening to feedback, watching what your bot says, and learning from other people’s experience.

Here are some more specific suggestions:

join the Botmakers Slack group to share your experience, ask questions, and get feedback