Bots: Not Just for Cybercrime

Bots, or robots, serve a command computer by carrying out automated functions at their master's bidding. Sounds ominous, don't you think? Indeed, many common uses of bots are criminal: denial-of-service attacks, extortion, identity theft, spam, and fraud. Multiple infected computers together form a botnet, with each individual computer termed a zombie, implying a lack of control over its own functions. But although bots are essential tools for the modern cybercriminal, not all bots have malicious intent, as certain cases from Twitter make especially clear.

A 2009 report found that one-quarter of all tweets came from bots; the percentage is surely much higher now. While a number of these bots were created for spamming, others post automated updates from popular sites like IMDb or Digg. Still others were personae invented out of, one assumes, a desire to explore the capabilities of the medium.
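The site-update bots mentioned above are conceptually simple: pull headlines from a feed, format each one to fit a tweet, and post it. A minimal sketch of that formatting step is below, using only the standard library. The sample headlines and the truncation logic are illustrative assumptions; a real bot would fetch an actual RSS feed and post through Twitter's API.

```python
# Sketch of the formatting core of a benign "site update" bot:
# turn a headline plus link into a tweet that fits the 140-character limit.

TWEET_LIMIT = 140  # Twitter's classic per-tweet character limit

def format_tweet(headline: str, url: str) -> str:
    """Build a tweet from a headline and a link, truncating the
    headline with an ellipsis so the whole tweet fits TWEET_LIMIT."""
    room = TWEET_LIMIT - len(url) - 1  # reserve the link and one space
    if len(headline) > room:
        headline = headline[: room - 1].rstrip() + "…"
    return f"{headline} {url}"

if __name__ == "__main__":
    # Hypothetical feed items standing in for a real RSS fetch.
    sample = [
        ("New trailer released for upcoming film", "http://example.com/a"),
        ("A very long headline " * 10, "http://example.com/b"),
    ]
    for head, link in sample:
        tweet = format_tweet(head, link)
        print(len(tweet), tweet)
```

Posting the result on a schedule is all it takes to look like an active account, which is part of why such bots are so common.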

In one instance recorded early last year, a Japanese blogger was shocked to discover that two "really close" friends on Twitter were in fact bots, created as entries in a contest for realistic Twitter bots, but bots nonetheless. In a subsequent post, the blogger describes his initial dismay, but ultimately reflects, "…I hardly have any real contact with most twitter users. They're all people I talk to exclusively through this intermediary, twitter. In the end, apart from real acquaintances, the majority of these people are not humans, but just 'twitter accounts.' It really has nothing to do with whether they're bots or not."

A TechCrunch article explored the fictional persona of Jason Thorton, a bot developed in three hours as a side project. The real human behind Jason observes, “When the free flow of information becomes open, the amount of disinformation increases. There’s a real need for someone to come in and vet the people we ‘meet’ on social sites — it will be interesting to see how this market grows in the next year.”

These two examples raise interesting questions about the nature of online interaction. Bots will surely remain difficult to police, and as more stories like these crop up, will people question the depth of the relationships they form through sites like Twitter, where communication is truncated to its briefest incarnation? Or will simulated responses become par for the course, appreciated in their own right as contributions to online dialogue?