Robot Wars

23rd February 2011

Online astroturfing is more advanced and more automated than we’d imagined.

By George Monbiot. Published in the Guardian 23rd February 2011

Every month more evidence piles up, suggesting that online comment threads and forums are being hijacked by people who aren’t what they seem to be. The anonymity of the web gives companies and governments golden opportunities to run astroturf operations: fake grassroots campaigns, which create the impression that large numbers of people are demanding or opposing particular policies. This deception is most likely to occur where the interests of companies or governments come into conflict with the interests of the public. For example, there’s a long history of tobacco companies creating astroturf groups to fight attempts to regulate them.

After I last wrote about online astroturfing, in December, I was contacted by a whistleblower. He was part of a commercial team employed to infest internet forums and comment threads on behalf of corporate clients, promoting their causes and arguing with anyone who opposed them. Like the other members of the team, he posed as a disinterested member of the public. Or, to be more accurate, as a crowd of disinterested members of the public: he used 70 personas, both to avoid detection and to create the impression that there was widespread support for his pro-corporate arguments. I’ll reveal more about what he told me when I’ve finished the investigation I’m working on.

But it now seems that these operations are more widespread, more sophisticated and more automated than most of us had guessed. Emails obtained by political hackers from a US cyber-security firm called HBGary Federal suggest that a remarkable technological armoury is being deployed to drown out the voices of real people.

– companies now use “persona management software”, which multiplies the efforts of the astroturfers working for them, creating the impression that there’s major support for what a corporation or government is trying to do.

– this software creates all the online furniture a real person would possess: a name, email accounts, web pages and social media. In other words, it automatically generates what look like authentic profiles, making it hard to tell the difference between a virtual robot and a real commentator.

– fake accounts can be kept updated by automatically re-posting or linking to content generated elsewhere, reinforcing the impression that the account holders are real and active.

– human astroturfers can then be assigned these “pre-aged” accounts to create a back story, suggesting that they’ve been busy linking and re-tweeting for months. No one would suspect that they came onto the scene for the first time a moment ago, for the sole purpose of attacking an article on climate science or arguing against new controls on salt in junk food.

– with some clever use of social media, astroturfers can, in the security firm’s words, “make it appear as if a persona was actually at a conference and introduce himself/herself to key individuals as part of the exercise … There are a variety of social media tricks we can use to add a level of realness to all fictitious personas”.

The same cache of documents includes a specification for persona management software, setting out what such a system must be able to do:

a. Create “10 personas per user, replete with background, history, supporting details, and cyber presences that are technically, culturally and geographically consistent. … Personas must be able to appear to originate in nearly any part of the world and can interact through conventional online services and social media platforms.”

b. Automatically provide its astroturfers with “randomly selected IP addresses through which they can access the internet.” [An IP address is the number that identifies a computer on the internet.] These are to be changed every day, “hiding the existence of the operation.” The software should also mix up the astroturfers’ web traffic with “traffic from multitudes of users from outside the organization. This traffic blending provides excellent cover and powerful deniability.”

c. Create “static IP addresses” for each persona, enabling different astroturfers “to look like the same person over time.” It should also allow “organizations that frequent same site/service often to easily switch IP addresses to look like ordinary users as opposed to one organization.”

Software like this has the potential to destroy the internet as a forum for constructive debate. It makes a mockery of online democracy. Comment threads on issues with major commercial implications are already being wrecked by what look like armies of organised trolls – as you can often see on the Guardian’s sites. The internet is a wonderful gift, but it’s also a bonanza for corporate lobbyists, viral marketers and government spin doctors, who can operate in cyberspace without regulation, accountability or fear of detection. So let me repeat the question I’ve put in previous articles, and which has yet to be satisfactorily answered: what should we do to fight these tactics?