On Social Media, Lax Enforcement Lets Impostor Accounts Thrive

Erin Barnes, a publicist and writer, learned that an automated “bot” account on Twitter was using her name, her portrait and a photograph of her husband and daughter. “It makes my skin crawl,” she said. Credit: Nick Cote for The New York Times

When Hilary Mason, a data scientist and entrepreneur, discovered that dozens of automated “bot” accounts had sprung up to impersonate her on Twitter, she immediately set out to stop them.

She filed dozens of complaints with Twitter, repeatedly submitting copies of her driver’s license to prove her identity. She reached out to friends who worked at the company. But days later, many of the fake accounts remained active, even though virtually identical ones had been shut down.

Millions of accounts impersonating real people roam social media platforms, promoting commercial products and celebrities, attacking political candidates and sowing discord. They spread fake images and misinformation about the school shooting last week in Parkland, Fla. They were central to Russian attempts to sway the 2016 presidential election in favor of Donald J. Trump, according to a federal grand jury indictment on Friday. And American intelligence officials believe they will figure in Russian efforts to shape the coming midterm elections, too.

Yet social media companies often fail to vigorously enforce their own policies against impersonation, an examination by The New York Times found, enabling the spread of fake news and propaganda — and allowing a global black market in social identities to thrive on their platforms.

Facebook and Twitter require proof of identity to shut down an impostor account but none to set one up. And even as social media accounts evolve into something akin to virtual passports — for shopping, political activity and even gaining access to government services — technology companies have devised their own rules and standards, with little oversight or regulation from Washington.

“These companies have, in a lot of ways, assigned themselves to be validators of your identity,” said Jillian York, an official at the Electronic Frontier Foundation, which advocates digital privacy protections. “But the vast majority of users have no access to any due process, no access to any kind of customer service — and no means of appealing any kind of decision.”

In congressional hearings last week, some lawmakers questioned whether social media businesses were doing enough.

“I think the companies themselves were slow to recognize this threat,” said Senator Mark Warner, Democrat of Virginia. “I think they’ve still got more work to do.”

Leaders of some social media companies have said they are trying hard to grapple with impersonation. In an earnings call this month, Jack Dorsey, Twitter’s chief executive, said the company was expanding what it calls “information quality” efforts, including ways of elevating credible and authentic content on the platform.

Facebook’s terms of service prohibit impersonation and require that account holders generally use their real names. Twitter, however, allows parody accounts and pseudonyms, and forbids impersonation only when an account portrays another user “in a misleading or deceptive manner.” The company does not proactively review accounts for impersonation.

That policy can leave real users mystified or enraged. In December, Firoozeh Dumas, an Iranian-American memoirist who lives in Germany, repeatedly reported to Twitter at least four accounts impersonating her. “They have my photos, they tweet things from my books,” she said. “One of them seems to be selling things.”

Yet each time Ms. Dumas reported them, emails show, Twitter’s support team told her the accounts did not meet its definition of abusive impersonation.

A Times investigation last month found that many real accounts are copied and turned into automated “bots” sold by companies like Devumi, a firm now based in Colorado that is under investigation by attorneys general in Florida and New York. (Through a spokesman, Devumi has denied selling fake accounts.)

One victim was Erin Barnes, a publicist and writer in Colorado. A bot sold by Devumi used not only her name and portrait, but also a background photograph of her husband and young daughter. “It makes my skin crawl,” Ms. Barnes said.

The account was suspended only recently, after Ms. Barnes — alerted by The Times — reported it to Twitter. “If you’re using somebody’s photos and name together, then that’s impersonation,” she complained.

Twitter appears to be tracking Devumi’s network of bots. Since the Times investigation was published, dozens of Devumi’s most prominent clients — actors, reality TV stars, authors, business executives and others seeking to buy followers and retweets — have lost more than three million followers. Close to 55,000 impostor accounts sold by Devumi have been restricted or suspended.

Twitter has declined to say whether Devumi’s bots violate its impersonation policy, or how many of its employees are focused on rooting out impersonation. The company’s first line of defense against impersonation is the countermeasures that flag accounts that run afoul of Twitter policies on spam — violations that can be easier for the platform to identify and stop at large scale.

But impostor accounts are still relatively easy to find on Twitter. The Times identified hundreds more of them through Twitter’s own automated “who to follow” feature: When a user views a known impostor account, Twitter routinely recommends other impostor accounts to follow.

One real Twitter account, belonging to Jasmine Artis, a health care worker from North Carolina, was cloned dozens of times. At least 75 of those impostor accounts still exist — though some have recently been restricted — each using her picture, her name and a brief bio that refers to the school she was attending when her account was copied. Most of the clones have made only a handful of posts, some in Russian or Japanese. Ms. Artis said she had not been aware of the accounts.

Even Twitter’s “verified” users, many of them well known, are being impersonated. There are active fake accounts impersonating the Democratic senator Cory Booker of New Jersey, the White House adviser Kellyanne Conway and the journalist April D. Ryan. None appeared to be obvious parodies. Instead, each posted content that mimicked what the real account might tweet.

Both Facebook and Twitter rely in part on their users to report impostors and abuse. But the companies’ enforcement decisions can seem arbitrary. In January, Antonia Caliboso, a social worker in Seattle, discovered an impostor Facebook account using her name, biographical information and a photo, all lifted from a 2013 news release about a fellowship she had won.

Antonia Caliboso, a social worker, deleted her real Facebook account after repeated attempts to get the social media platform to take down a phony profile impersonating her. Credit: Ruth Fremson/The New York Times

Ms. Caliboso and dozens of her friends reported the fake account to Facebook over a period of weeks. But Facebook representatives repeatedly told her the account did not violate the company’s impersonation policies, she said. Eventually, to protect herself, Ms. Caliboso deleted her real account.

“I can’t risk that a client or employer — past, present or future — might find that profile,” she said.

Last week, Facebook reversed its position: The account was shut down, according to an email Ms. Caliboso received on Feb. 10, “because it goes against our community standard on identity and privacy.”

Social media companies succeed in the marketplace by amassing as many active users as possible, so most make it relatively easy to create new accounts. Neither Facebook nor Twitter requires proof of identity to open a new account, but both require it when reporting impostors. Decisions about whether to take down or suspend an account are automated wherever possible, or farmed out to teams of low-level employees and contractors around the world.

As a result, on most social media platforms it is far easier to build a bot than to kill one.

“The reason that they’re so bad at this is that it conflicts with their business model,” said Zeynep Tufekci, a sociologist and technology expert at the University of North Carolina. “They try to come up with one set of rules that applies to two billion people, and then simplify it so that a contractor in a warehouse in the Philippines can go click-click-click and apply the policy.”

Ms. Mason, the data scientist, eventually took matters into her own hands. After she filed dozens of impersonation reports to Twitter in 2015, the company suspended many of the impostors, but left dozens untouched.

So she created more than 100 bots of her own, using as many variations of her name as she could think of. Each linked back to her real account, with the message “the real hmason is over there.” Eventually, Ms. Mason’s homegrown bots, and Twitter’s own efforts, seemed to overwhelm the impostors, she said, and the surge in fake accounts stopped.

“With Twitter, it was easy” to make bots, Ms. Mason said. “I was shocked by how easy it was.”

A version of this article appears in print on Page A1 of the New York edition with the headline: How Lax Enforcement Breeds Impostors Online.