ENGL 87400 / Graduate Center, CUNY / Spring 2015

The Moral Compass Behind “Weird Dog Bot”

When I started searching around for information on Twitter bots, the same page kept coming up: Automation Rules and Best Practices. Here, Twitter outlines its philosophy on the automatic triggering of certain events across the site.

The bot I attempted (and am still attempting) to create this week is Weird Dog Bot, which will scrape the phrase “my dog is weird” off of Twitter, but only if the user has also included an image. Pretty much, a repository for weird dogs on Twitter (though this would be perfect for Instagram as well). For some, I imagine this is not a very difficult bot to create, but I have so far dipped into a few tutorials (I practically completed the Wolfram Alpha one, which was very simple) and found them in one way or another lacking for what Weird Dog Bot needs. The closest I found was a Python Retweet Bot on Github, but as far as I can tell, it has no way to detect whether a tweet contains a photo, though I’m sure that’s not too difficult to add. Over the past year I’ve kept looking at the Twitter API Tutorial on Codecademy and telling myself I’m going to go for it, but I’m always distracted by my desire to learn Python and JavaScript first.
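For what it’s worth, the missing photo check is a small addition. Here is a minimal sketch of what Weird Dog Bot could look like in Python using the tweepy library against Twitter’s v1.1 REST API. The function names, credential placeholders, and the `has_photo` helper are my own illustrations, not from the Retweet Bot mentioned above:

```python
def has_photo(entities):
    """Return True if a tweet's entities dict includes at least one photo.

    In Twitter API v1.1, tweets with attached images carry a "media"
    list in their entities, each item tagged with a "type" field.
    """
    media = entities.get("media") or []
    return any(m.get("type") == "photo" for m in media)


def run_weird_dog_bot(consumer_key, consumer_secret, access_token, access_secret):
    """Search for the exact phrase and retweet only results with a photo.

    Assumes tweepy is installed (`pip install tweepy`) and that the four
    credentials come from a registered Twitter app.
    """
    import tweepy  # imported here so the photo check above stays standalone

    auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
    auth.set_access_token(access_token, access_secret)
    api = tweepy.API(auth)

    # Quoted query matches the exact phrase, newest results first.
    for status in api.search(q='"my dog is weird"', count=50):
        if has_photo(status.entities) and not status.retweeted:
            api.retweet(status.id)
```

The photo check is the only part the existing Retweet Bot tutorials seemed to skip; everything else is the standard authenticate–search–retweet loop.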

Anyway, automation: is it evil? If anything, it’s more of a chaotic neutral by default (or perhaps more on the ‘lawful’ side, as it follows a set of guidelines), but certainly there are reasons why Twitter has a page on what kind of automation is frowned upon. There are the malicious sort of bots that masquerade as humans, and then the good sort of bots that will take images you send them and tweet back what seems to be a randomly-generated quilt version. Neat. So how do you figure out where your bot falls “morally”? Is Weird Dog Bot a Terminator (good movie, bad robot), or a Chappie (bad movie, good robot)? Well, even with this analogy, the “bad robot” is the one who pretends to be human, the Terminator. Chappie, on the other hand, never pretended to be human; he was obviously a machine, albeit one who cannot quilt (or could he? I didn’t see it).

That’s not good enough though, right? Because sometimes there are bots that you know are bots and they’re terrible: spam bots. Alternatively, it’s likely that there are “good” bots out on the Twitterverse leading a perfectly normal “life”, not even trying to catfish anyone; but we wouldn’t know it, because we’d simply assume they’re human. With that said, if you don’t know the user personally and they happen to be a bot (a non-malicious one, let’s assume), then what difference does it make? They’ll have a conversation, favorite stuff, retweet, etc. just like a human would. I could simply operate the Weird Dog Bot account myself if I wanted to, yeah? Use the search feature, sort by “New”, and retweet pictures of weird dogs, and it wouldn’t be considered bad. So, if I created a bot to do it for me, how could it be bad, right? Further, the account has “bot” in the name. He’s obviously a Chappie, come on!

Some people just don’t like interacting with bots though. Have you ever been retweeted by a bot, or received a reply from a bot? It doesn’t feel “special”, right? It just so happens that you triggered the right terms in their engine to elicit a response…