These accounts didn't become uninteresting or offensive overnight. Rather, the far-right Twitter users had a significant percentage of followers that Twitter identified as Russian bots and purged overnight. The others were customers of Devumi, a "social media marketing" company whose sketchy tactics came under investigation by the New York State Attorney General's Office after The New York Times lambasted the company only days before people noticed a drop in followers.

Twitter forbids the purchase of followers, likes, retweets, and the like, as well as impersonation in a "misleading or deceptive manner." And the social media platform ramped up its efforts to tackle spam after it was discovered that bots run by the Kremlin-linked Internet Research Agency were used to sway the results of the 2016 presidential election. However, Twitter's reporting mechanisms for human users who identify bots continue to be lacking.

Twitter users are split over how the social media platform should handle bots. Many alt-right Twitter users denied that their deleted followers were bots, while others, like Mark Cuban, have gone so far as to suggest that Twitter require a real name and a real person behind every account.

There are many reasons why Twitter doesn't prevent the creation of bots. Not all bots are created equal, and automated accounts can actually benefit the Twitter community (all of the news outlets you follow on Twitter are at least partially automated). But in the Fake News Era, many human Twitter users have grown sophisticated at identifying harmful bots – like the ones posing as real humans to sell Twitter followers or spread propaganda.

So how should Twitter battle bot influence? There needs to be an official registry where bot-like accounts can be flagged for users who don't want to devote their time to determining whether every account that shows up on their timeline is linked to a real person – and to make Twitter's spam-determination policies more transparent.

Independent Twitter users have already started doing the work – Robhat Labs provides a browser plug-in called Botcheck.me that lets users check whether an account shows propaganda-bot-like patterns and report suspicious accounts they find. Twitter Audit is a program created by two users that estimates the percentage of "fake" followers for any user. Both programs allow you to further investigate the credibility of different accounts on the platform.
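As a rough illustration of what these tools do (not Botcheck.me's or Twitter Audit's actual methods), a bot-likeness heuristic might combine a few publicly visible signals into a score. The signals, thresholds, and weights below are invented for the sketch:

```python
from dataclasses import dataclass


@dataclass
class AccountStats:
    """Publicly visible signals about a Twitter account (hypothetical fields)."""
    tweets_per_day: float
    followers: int
    following: int
    has_default_avatar: bool
    account_age_days: int


def bot_likeness_score(a: AccountStats) -> float:
    """Return a heuristic score from 0.0 to 1.0; higher means more bot-like.

    Weights and thresholds here are illustrative guesses, not real tooling.
    """
    score = 0.0
    if a.tweets_per_day > 50:  # sustained high-volume posting
        score += 0.35
    if a.has_default_avatar:  # never bothered to set a profile picture
        score += 0.15
    if a.following > 0 and a.followers / a.following < 0.1:
        score += 0.25  # follows far more accounts than follow it back
    if a.account_age_days < 30:  # very recently created
        score += 0.25
    return min(score, 1.0)
```

Real detectors weigh far more signals (posting cadence, text similarity across accounts, coordinated retweet timing), but the idea is the same: no single signal proves an account is a bot, so each one only nudges the score.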

Unlike Twitter's current policy – which allows users to report spam and impersonation accounts for the company to investigate – an official suspected-bot registry would allow users to see which accounts exhibited bot-like behaviors, rather than relying on their own sleuthing. Users with experience identifying harmful bots would be able to see which accounts Twitter was investigating, and could more easily follow up to see whether those accounts had been removed from the platform.
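To make the proposal concrete, such a registry could be little more than a public lookup from handle to case status, so anyone can flag an account and later check whether Twitter cleared or suspended it. The class, statuses, and handles below are hypothetical, not any real Twitter API:

```python
from enum import Enum


class Status(Enum):
    """Lifecycle of a registry entry (hypothetical states)."""
    FLAGGED = "flagged"            # reported by a user, not yet reviewed
    UNDER_REVIEW = "under_review"  # Twitter is investigating
    SUSPENDED = "suspended"        # confirmed bot, removed from platform
    CLEARED = "cleared"            # investigated and found legitimate


class BotRegistry:
    """A minimal sketch of a public suspected-bot registry."""

    def __init__(self):
        self._entries = {}

    def flag(self, handle: str, reason: str) -> None:
        """Record a new suspicion against an account."""
        self._entries[handle] = {"status": Status.FLAGGED, "reason": reason}

    def update(self, handle: str, status: Status) -> None:
        """Advance an existing entry to a new lifecycle state."""
        self._entries[handle]["status"] = status

    def lookup(self, handle: str):
        """Return the entry for a handle, or None if never flagged."""
        return self._entries.get(handle)
```

The point of the sketch is the transparency property the article argues for: the entry survives the investigation, so anyone can see not just that an account was flagged, but how the case was resolved.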

Conservative Twitter's uproar over losing followers could have been avoided if this fantasy registry existed, because a registry would ensure transparency. People charged with crimes face a public trial before being found guilty or acquitted; why can't the same be true for bots that spread deception online?

Kremlin-linked groups are not the only organizations that run disinformation campaigns on Twitter, so an official bot registry would not list Kremlin-linked bots alone. Such a registry would let more users see whether accounts exhibited bot-like behaviors, even if those users aren't trained to recognize them.

Twitter is deep into its war against the bots, and the 2016 election proved that it's not just the platform's credibility that is at stake. How Twitter chooses to change its policies related to bots will likely affect how companies, politicians, media, and influencers tweet moving forward.
