It's no secret that every major social media platform is chock-full of bad actors, fake accounts, and bots. The big companies continually pledge to do a better job weeding out organized networks of fake accounts, but a new report confirms what many of us have long suspected: they're pretty terrible at doing so.

The report comes this week from researchers with the NATO Strategic Communication Centre of Excellence (StratCom). Over the four-month period between May and August of this year, the research team conducted an experiment to see just how easy it is to buy your way into a network of fake accounts and how hard it is to get social media platforms to do anything about it.

The research team spent €300 (about $332) to purchase engagement on Facebook, Instagram, Twitter, and YouTube, the report (PDF) explains. That sum bought 3,520 comments, 25,750 likes, 20,000 views, and 5,100 followers. They then worked backward from those interactions to identify about 19,000 inauthentic accounts used for social media manipulation.

About a month after buying all that engagement, the research team looked at the status of all those fake accounts and found that about 80 percent were still active. So they reported a sample selection of those accounts to the platforms as fraudulent. Then came the most damning statistic: three weeks after being reported as fake, 95 percent of the fake accounts were still active.

"Based on this experiment and several other studies we have conducted over the last two years, we assess that Facebook, Instagram, Twitter, and YouTube are still failing to adequately counter inauthentic behavior on their platforms," the researchers concluded. "Self-regulation is not working."

Too big to govern

The social media platforms are fighting a distinctly uphill battle. The scale of Facebook's challenge, in particular, is enormous. The company boasts 2.2 billion daily users of its combined platforms. Broken down by platform, the original big blue Facebook app has about 2.45 billion monthly active users, and Instagram has more than one billion.

Facebook frequently posts status updates about "removing coordinated inauthentic behavior" from its services. Each of those updates, however, tends to snag between a few dozen and a few hundred accounts, pages, and groups, usually sponsored by foreign actors. That's a drop in the bucket even compared to the 19,000 fake accounts this one research study uncovered from a single $300 outlay, let alone the vast ocean of other fake accounts out there in the world.

The issue, however, is both serious and pressing. A majority of the accounts found in this study were engaged in commercial behavior rather than political troublemaking. But attempted foreign interference in both a crucial national election on the horizon in the UK this month and the high-stakes US federal election next year is all but guaranteed.

The Senate Intelligence Committee's report (PDF) on social media interference in the 2016 US election is expansive and thorough. The committee determined Russia's Internet Research Agency (IRA) used social media to "conduct an information warfare campaign designed to spread disinformation and societal division in the United States," including targeted ads, fake news articles, and other tactics. The IRA used and uses several different platforms, the committee found, but its primary vectors are Facebook and Instagram.

Facebook has promised to crack down hard on coordinated inauthentic behavior heading into the 2020 US election, but its challenges with content moderation are by now legendary. Working conditions for the company's legions of contract content moderators are terrible, as repeatedly reported, and it's hard to imagine the number of humans you'd need to review the billions of pieces of content posted every day. Using software tools to recognize and block inauthentic actors is obviously the only way to capture it at any meaningful scale, but the development of those tools is clearly also still a work in progress.

82 Reader Comments

Instagram is really bad. A few of the people I follow have actively tried to get people to report fake accounts that contact followers and try to get them to send money, go to web pages, call phone numbers, etc., and Instagram just basically says "not enough proof," even though people send them multiple examples. It's lazy, and it's just a fact that they turn their head the other way to save a few pennies.

Remember, the root cause of nearly every trouble on the internet comes down to marketing. Facebook (and the rest) have a vested interest in leaving bots alone, because they drive clicks and engagement for the marketing that pays the bills.

It's the same reason we still can't have a reasonable discourse about limiting ads on websites in ways that prevent malware, short of end users blocking ads entirely.

As for the marketing people dumping so much into these economies: they get pretty dashboards and reports claiming their efforts are working wonderfully, and nobody in the organization wants to question that house of cards. Every CEO would lose their job if they acted honestly about marketing "results." Moreover, the bots generating clicks and such are paid for by someone -- the same funds paying for the ads. It's well understood that ad-sales middlemen take their cut and send a little bit to bot farmers to make the ad campaign look more successful.

So as much as people want to preach on the singular issue of political influence or annoying comment sections, it comes from the same place as a multitude of other evils -- the need to pretend to drive ads to users on the internet as well as the need to pretend ads universally work.

I wonder whether something would happen if fake accounts could be shown to have a negative impact on advertising revenue.

I for one, if I were going to spend targeted-advertising money, wouldn't want it appearing alongside fake content. (Satire is a different matter, but I realise some people are stupid, so satire probably needs to clearly identify itself as such.)

Using software tools to recognize and block inauthentic actors is obviously the only way to capture it at any meaningful scale, but the development of those tools is clearly also still a work in progress.

That's an understatement. Any software smart enough to be able to do a decent job of content moderation would also probably be capable of simply saying "Heck with it" and getting itself elected as Leader of Earth in order to simplify the job.

The first part is also a somewhat disingenuous position, marketed most famously by MZ when he sat in front of Congress and claimed that the magical AI they are working on will solve all of FB's woes. Assuming that the solution needs to work for all billion-plus users is ludicrous. When you own the platform, you can introduce certain structural changes in how the platform works to hinder fake account creation and to hinder the flow of disinformation. For US accounts, you can give the option to authenticate accounts using mobile phones or a credit card check. Unauthenticated accounts can face more friction in the reach of what they broadcast, etc. At the least, when your security chief reports foreign interference on your platform, you do not dismantle their team and drive them out.

But, but, the total number of accounts is what gets reported, right? If 80% of them are fake, then advertisers won't pay nearly as much.

I only have social media accounts out of curiosity. Haven't logged into them for at least 6, possibly 12, months. Have about 5 accounts per network, just to see how things work. None have my real name and none are connected to a phone. If they want to figure out who I really am, it wouldn't be hard, just a little effort, with some govt assistance.

That reminds me: isn't Twitter going to delete inactive accounts in a few days? I suppose I need to fire up the VPN, log in, switch the VPN exit node, log in, switch the VPN exit node, log in... to keep them active.

If those accounts are gone, oh well. Live by the cloud, die by the cloud.

Using software tools to recognize and block inauthentic actors is obviously the only way to capture it at any meaningful scale, but the development of those tools is clearly also still a work in progress.

Another problem I see is that the filtering would have to be a very black-box kind of endeavor. Otherwise, bad actors would be able to keep pace easily if they knew what they had to do to avoid detection. The other side of this is that people would throw an absolute bitch-fit about black-box automated moderation.

Social media has a vested financial interest in leaving fake accounts be, as they generate the "insightful commentary" that drives traffic and ultimately CPMs. This should be obvious to cynics, but it is disappointing all the same.

The more capitalistic "communities" get, the greater the risk to democracy we open ourselves to.

It's very easy to leave Facebook. You don't have to be on any social media. The human race has lived for thousands of years without it. Foaming at the mouth about the evils of capitalism does not change the fact that you have the choice not to use any social media.

Yes and no.

Yes, in that I’m not on anything more ‘social media’ than the Ars forums right now, so I’m not exactly going to deny the possibility of doing what I’m already doing.

No in the sense that the “human race has lived for thousands of years without it” argument ignores such a vital element that I can’t pass it by without comment:

Obviously common access to Facebook and ubiquitous twitfeed and whatnot are recent phenomena. However, the great power and stickiness of something like Facebook isn't in being some grand advance in social interaction (it pretty much entirely isn't), but in being the place where people in your social circle do stuff, forcing you to go to nontrivial additional effort to keep in the loop by other means, or run the real risk of being left out/falling out of touch.

Some mechanism for staying connected with your social circle (or assembling a new one if circumstances change) is a thing that humans have had, because they make it for themselves one way or another if necessary, for probably longer than we've been classified as humans, if ape behavior is anything to go by.

As a socializing technology there's not too much to be impressed by in Facebook, so it is easy enough to abandon in that sense; but if the people relevant to you don't abandon it, then to abandon it yourself is to abandon your de facto organizing social institution, a thing that humans have not historically done voluntarily when they can avoid it.

It seems to me that advertisers now have a decent basis to file fraud charges against social media companies they pay to deliver ads. Especially Facebook, whose ads are usually specific and directed, and whose advertisers expect a HUMAN to see them.

We might not get reform through regulatory or statutory means, but we might see it if advertisers get pissed about paying to have their ads seen by real people and finding out they aren't.

A lot of people here are talking about banning as the solution, but I was thinking: wouldn't it be far easier if it were just made very difficult to sign up in the first place?

If you required a lot of ID info to sign up, added some two-step verification, and removed the ability to post outside your own page for x amount of time, that would make troll farms all but impossible to set up.

From there you can then whittle down the fakes that remain quickly by asking for more identifying information.
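The friction the commenter proposes can be sketched as a simple policy check. Everything here is hypothetical for illustration: the field names, the 14-day cooldown, and the idea that verification gates external posting are assumptions, not any platform's actual rules.

```python
from datetime import datetime, timedelta

# Hypothetical policy: new or unverified accounts cannot post outside
# their own page until they pass ID verification and age past a cutoff.
POSTING_COOLDOWN = timedelta(days=14)  # assumed value, not a real platform setting

def may_post_externally(created_at: datetime, id_verified: bool,
                        now: datetime) -> bool:
    """Return True if the account may post on pages other than its own."""
    if not id_verified:
        return False  # unverified accounts stay restricted indefinitely
    return now - created_at >= POSTING_COOLDOWN
```

A troll farm would then need both verified identities and weeks of lead time per account, which is exactly the cost increase the comment is aiming for.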

Using software tools to recognize and block inauthentic actors is obviously the only way to capture it at any meaningful scale, but the development of those tools is clearly also still a work in progress.

Another problem I see is that the filtering would have to be a very black-box kind of endeavor. Otherwise, bad actors would be able to keep pace easily if they knew what they had to do to avoid detection. The other side of this is that people would throw an absolute bitch-fit about black-box automated moderation.

You use the software to pre-filter. The greatly reduced number of detections then goes to a person for the final decision.

But again: all these trolls and bots are doing Facebook's work; they're increasing engagement, and FB thrives on that. There's no reason for FB to reduce their numbers - quite the opposite, in fact.

I've never really used it. Had a FB account for about a month several years ago that I shut down (or thought I had; it recently resurfaced all by itself and started spamming my old email account until I went in and shut it down again). And I don't think I'm all that unusual.

Now, I spend a fair amount of time right here in the comments section; I don't know if that counts as SM or not. But although we get our share of bots and trolls, they don't seem to predominate like they do elsewhere.

Fake growth is just as good as real growth, if that's what shareholders want.

Facebook can target advertising down to the number of hairs on your head, but "can't" stop fakes? Of course they can; it's just in their best interest not to. Could you imagine if it turned out that 25% of Facebook is fake and the whole thing is a Potemkin village? Billionaires would lose millions.

Twitter is the poster child of fake accounts; FB is a close second.

Definitely true. As someone who uses Twitter as a sort of alternate newsfeed to RSS or Google News, I see so much fakery and obvious self-liking and retweeting. Especially with stuff that's obviously trolling or hate speech, and just plain harassment.

Reporting doesn't seem to do all that much either, except once in a blue moon.

Edit: Facebook also seems to have a lot of that stuff, just not as frequently, it seems, and the name calling and idiocy tends to be just a little bit less bad too, in my admittedly limited experience.

Why would you use something that has such a low trust ratio for something as important as news?

We are often our own worst enemy.

Because the mainstream media (at least in the U.K.) is that bad, and that allergic to presenting any opinions outside of a very narrow elitist worldview.

If you're BAME, live in the countryside, are libertarian or poor, or support people like Jeremy Corbyn, then you are completely ignored. Once you add in the far right and Stalinists, who are quite rightly ignored, there isn't a lot of the population left.

Force users to go through a one-time online identity-verification process. Such services cost less than $1.25. People tend to improve their behavior when they are no longer anonymous.

It could be first deployed to the heaviest users, or those posting on select controversial subjects like politics, just to see the effects... then decide how widely to deploy it.

But wasn't this Facebook's original claim to fame? That it verified account holders were real people, and therefore advertising dollars could be tied directly to a specific consumer? Didn't they pride themselves on demanding accounts use real names?

We've seen how that turned out.

I don't disagree with your suggestion, but implementing it in a meaningful way is a lot harder than it might sound. It would probably cost the troll/bot factories less than a nickel to generate all the needed background to pass a verification test, and we'd be right back where we are now.

Because the mainstream media (at least in the U.K.) is that bad, and that allergic to presenting any opinions outside of a very narrow elitist worldview.

If you're BAME, live in the countryside, are libertarian or poor, or support people like Jeremy Corbyn, then you are completely ignored. Once you add in the far right and Stalinists, who are quite rightly ignored, there isn't a lot of the population left.

No, you're not ignored. But you're not given the AAADoublePlus priority at the top of each and every page that you think you deserve, and you wind up getting relegated to some lesser position based on your actual importance.

Flat Earthers feel exactly the way you do, and love to go on about how they're brushed aside. Yet there's no shortage of reporting on them. It's just that such reporting tends to be truthful and exposes their sheer lunacy when it occurs, and it occurs in proportion to the movement's wider impact - which is to say, not much.

Facebook had previously reported that about 3 percent to 4 percent of its active users were fake. According to the new figures, the accounts taken down each quarter were equivalent to 25 percent to 35 percent of its active users (though those accounts were not counted in Facebook’s active-user tallies because they had been removed).

More curious was how Facebook’s estimate of active fake accounts barely budged even as the number of accounts it took down each quarter fluctuated widely. For instance, Facebook said it had caught 583 million fake accounts in the first quarter of 2018 and 800 million the next quarter. Yet between those two quarters, it told investors that its active fake accounts had increased by roughly one million.

“ A majority of the accounts found in this study were engaged in commercial behavior rather than political troublemaking.”

IOW, ordinary fraud is rampant on Facebook. If *I* ran Facebook, I would allow that only if my users *LIKED* being lied to about products' quality, value, desirability, etc. Because it directly substitutes for paid ad revenue and lowers SOME people's trust in what they see to below zero. That's why my account is so inactive.

What you are describing, though, is the history of advertising: lying rampantly. If the president can do it, why not Walmart?

If Facebook won't fix the problem because there is no incentive, then maybe we should create one. I wonder how many fake Zuckerberg accounts, or fake employee accounts spewing fake bad news about the company, it would take before the company does something about this problem.

I can guarantee you that pretty much every single post or account containing the words "zuck," "facebook," or "FB" is flagged and scrutinized syllable by syllable by an actual human censor.

Unlike any of the troll, bot, or fake news accounts and stories that permeate the platform.