Twitter finds an enemy within

One reason I write this newsletter about social networks is to cover the new and exotic methods that state actors employ to bend the public to their will. Much of the conversation over the past two years has been around “troll farms” or “troll armies” — essentially, remote workforces that attempt to wreak havoc from their laptops on targets around the world.

Twitter executives first became aware of a possible plot to infiltrate user accounts at the end of 2015, when Western intelligence officials told them that the Saudis were grooming an employee, Ali Alzabarah, to spy on the accounts of dissidents and others, according to five people briefed on the matter. They requested anonymity because they were not authorized to speak publicly.

Mr. Alzabarah had joined Twitter in 2013 and had risen through the ranks to an engineering position that gave him access to the personal information and account activity of Twitter’s users, including phone numbers and I.P. addresses, unique identifiers for devices connected to the internet.

Perhaps it had previously occurred to you that state actors would attempt to recruit engineers and other social-network employees as spies. I spent less time thinking about it than I probably should have! In any case, it’s chilling, and had real-world consequences. Alzabarah — who was fired, and now reportedly works for the Saudi government — accessed dozens of accounts, as part of a wide-ranging effort to identify the kingdom’s most influential critics and intimidate them into silence.

Another part of this effort involved the consulting company McKinsey, best known as the place where your college friends spend two lazy postgraduate years before business school. As the New York Times reported, McKinsey assembled a 9-page report on the Saudis’ behalf naming prominent Saudi dissidents. One of the men named was arrested, along with two of his brothers, and the account of an anonymous critic was shut down. (McKinsey denied everything, rather weakly.)

Facebook has spoken often in the past about the strict controls it places around user accounts in an effort to thwart the kind of attack that Alzabarah mounted. Every time a user’s data is accessed, Facebook logs which employee did so, and it regularly audits those logs for suspicious behavior.

In the wake of Trump’s account deactivation shortly before 10PM ET on Thursday, former employees gathered in a private Slack that they use to discuss the company’s travails. The rogue employee, who has not been identified, was an immediate source of fascination. “We’re now referring to this individual as ‘the legend,’” one former employee told The Verge. At the same time, the former employee was not surprised by the incident. “People have ‘dropped the mic’ in the past and deleted accounts, verified users, and otherwise abused their power on the last day,” the employee said. In each case, the employee said, the abuse was caught quickly and did not become public.

These “mic drops” were possible because of the broad availability of customer support tools inside Twitter. The company won’t say how many people have access to the tools necessary to deactivate an account like Trump’s — and after today, the number is likely much lower. But up until now, as many as hundreds of people have had access to the tools, which let employees see a broad range of information about the account. The access does not allow employees to send tweets from other users’ accounts, or to read a user’s direct messages.

The man was eventually revealed to be a German citizen named Bahtiyar Duysak. He said that he had made a mistake. Still, when considered in light of the Times’ story about spying, it ought to give pause to the large group of people who use Twitter as a tool for activism.

It ought to give pause to other social networks, as well. I asked around for other public cases in which a social network had caught a spy in its ranks, and came up empty. But it’s a safe bet that others have attempted the playbook that the Saudis have, and possibly succeeded — at Twitter and elsewhere. For activists who risk their freedom when they tweet, it’s a chilling reminder to take extra steps to protect their identities, lest they wind up in the next McKinsey report. And for Twitter, it’s another major embarrassment in a year that has had too many of them.

Adam Satariano investigates more Facebook dark money: a group buying ads that push Britain to exit the European Union on much starker terms than the government has planned. Facebook says it will soon require British advertisers to confirm and disclose their real identities:

In the past 10 months, the organization spent more than 250,000 pounds on ads pushing for a more severe break from the European Union than Mrs. May has planned. The ads reached 10 million to 11 million people, according to a report published on Saturday by a House of Commons committee investigating the manipulation of social media in elections.

The ads, which disappeared suddenly this week, linked to websites for people to send prewritten emails to their local member of Parliament outlining their opposition to Mrs. May’s negotiations with the European Union.

The Digital Forensics Research Lab digs in on the October 19th indictment of a Russian national in connection with an effort to interfere in the US midterm elections. Key point: Russia is spending more on its campaign this year than it did in 2016. (Fake accounts are getting more expensive!)

The first financial detail included in the criminal complaint against Elena Khusyaynova showed that between January 2016 and June 2018, Project Lakhta’s proposed operating budget was more than two billion Russian rubles ($35 million USD). In the first half of 2018, the proposed operating budget was 650 million Russian rubles (over $10 million USD).

Put simply, the budget for the first half of 2018 nearly matched the total troll farm budgets from 2016 and 2017. The itemized budget requests, which Khusyaynova allegedly organized, increased every single month in 2018.

Sue Halpern surveys the political landscape post-Cambridge Analytica and finds any number of companies still invested in the same kind of psychographic targeting. And much of it looks, on the surface, even more invasive than anything Cambridge Analytica did:

Judging personalities, measuring voice stress, digging through everything someone has ever said—all of this suggests that future digital campaigns, irrespective of party, will have ever-sharper tools to burrow into the psyches of candidates and voters. Consider Avalanche Strategy, another startup supported by Higher Ground Labs. Its proprietary algorithm analyzes what people say and tries to determine what they really mean—whether they are perhaps shading the truth or not being completely comfortable about their views. According to Michiah Prull, one of the company’s founders, the data firm prompts survey takers to answer open-ended questions about a particular issue, and then analyzes the specific language in the responses to identify “psychographic clusters” within the larger population. This allows campaigns to target their messaging even more effectively than traditional polling can—because, as the 2016 election made clear, people often aren’t completely open and honest with pollsters.

“We are able to identify the positioning, framing, and messaging that will resonate across the clusters to create large, powerful coalitions, and within clusters to drive the strongest engagement with specific groups,” Prull said. Avalanche Strategy’s technology was used by six female first-time candidates in the 2017 Virginia election who took its insights and created digital ads based on its recommendations in the final weeks of the campaign. Five of the six women won.

Snapchat is a surprisingly popular place for kids to get news, according to new data from the Knight Foundation:

In a survey of 5,844 college students from 11 US institutions, 89 percent said they got at least some of their news from social media over the previous week. And Facebook was the most popular outlet, with 71 percent of respondents saying they got news from the platform during that time period. Interestingly, Snapchat came in second place, with 55 percent of the students saying they had gotten news from the app during the past week. And YouTube, Instagram and Twitter followed, pulling 54 percent, 51 percent and 42 percent of respondents, respectively.

Ryan Broderick looks at the political success that a group of YouTubers have had getting elected to Congress in Brazil:

Kim Kataguiri is known in Brazil for a lot of things. He’s been called a fascist. He’s been called a fake news kingpin. Is he a YouTuber? He definitely uses YouTube. He’s definitely a troll. A troll with a consistent message, though, he points out. Maybe he’s Brazil’s equivalent of Milo Yiannopoulos. His organization, Movimento Brasil Livre (MBL) — the Free Brazil Movement — is like the Brazilian Breitbart. Or maybe it’s like the American tea party. Maybe it’s both. Is it a news network? Kataguiri says it isn’t. But it’s not a political party, either. He says MBL is just a bunch of young people who love free market economics and memes.

One thing is very clear: His YouTube channel, the memes, the fake news, and MBL’s army of supporters have helped Kataguiri, 22, become the youngest person ever elected to Congress in Brazil. He’s also trying to become Brazil’s equivalent of speaker of the House.

YouTube, meanwhile, is warning creators about Article 13 of the European Union’s proposed copyright directive:

This legislation poses a threat to both your livelihood and your ability to share your voice with the world. And, if implemented as proposed, Article 13 threatens hundreds of thousands of jobs, European creators, businesses, artists and everyone they employ. The proposal could force platforms, like YouTube, to allow only content from a small number of large companies. It would be too risky for platforms to host content from smaller original content creators, because the platforms would now be directly liable for that content. We realize the importance of all rights holders being fairly compensated, which is why we built Content ID and a platform to pay out all types of content owners. But the unintended consequences of article 13 will put this ecosystem at risk. We are committed to working with the industry to find a better way. This language could be finalized by the end of the year, so it’s important to speak up now.

Brendan Iribe, who led Facebook-owned VR company Oculus until 2016 before moving to lead its PC VR division, is leaving. He’s the 10th high-ranking Facebook executive to quit this year. Also leaving — an Interface exclusive! — is Oculus’ head of diversity and inclusion, Ebony Peay Ramirez. Ramirez, who worked at Oculus for four years, had her last day on Friday.

Iribe was an Oculus co-founder, helping Rift inventor Palmer Luckey to launch the experimental headset on Kickstarter in 2012. He served as CEO until 2016, when he stepped down to lead Oculus’ PC-based Rift VR division, and the CEO position was replaced by a “Facebook VP of VR” role held by Hugo Barra. Iribe was conspicuously absent at last month’s Oculus Connect conference, where fellow co-founder Nate Mitchell handled press interviews — and where PC-based VR was basically an afterthought, compared to standalone mobile headsets. VRFocus confirms that Mitchell will lead the division going forward.

Reed Albergotti and Sarah Kuranda say Facebook wants to make a big cybersecurity acquisition that it can point to during its next Congressional hearing.

The company’s push comes in the wake of a devastating security breach that affected 30 million users, an incident that added to a litany of security and privacy concerns swirling around the social media company in recent months. Facebook is betting that a splashy acquisition of a security company might serve the dual purpose of bolstering its talent in that field and delivering a much-needed public relations win. It formed a team of people inside its corporate development department to search for cybersecurity companies that might be willing to be acquired, said one of the people familiar with Facebook’s strategy. A Facebook spokeswoman declined to comment.

Nathaniel Popper looks at some blockchain companies that could challenge Google and Facebook. Or, at the very least, be acquired by them!

Ocean Protocol, a project based in Berlin, is building the infrastructure so that anyone can set up a marketplace for any kind of data, with the users of data paying the sources with digital tokens.

Unlike Google and Facebook, which store the data they get from users, the marketplaces built on Ocean Protocol will not have the data themselves; they will just be places for people with data to meet, ensuring that no central player can access or exploit the data.

How did I miss this amazing story about how the outgoing CEO of Nextdoor, an extremely dumb social network for freaking out when you see a stranger walking down the block, got mad about an extremely funny Twitter account that posts said freakouts? Well, I did. Read it:

“I was surprised [that] this was the first time Nirav has publicly acknowledged @bestofnextdoor!” Jenn Takahashi, the parody account’s creator, tells The Verge. “I heard through the grapevine that he wasn’t a fan of the account, and I’m still not sure why.

“I did meet up with the [Nextdoor] head of community recently and really tried to emphasize that I’m not trying to take them down or anything,” she adds. “I only post things to make people laugh, and I do my best to redact private info and protect their users’ privacy. I get a ton of really depressing submissions (I’m sure you can imagine), but I don’t post those, because I’m just trying to bring a little bit of levity back to the internet.”

David Kirkpatrick, who wrote the defining early history of Facebook, reconsiders his book in light of the past two years. He finds the company too slow to act and too defensive, with no clear answers for what should come next.

The last 150 years of global progress towards universal democracy may be imperiled. But it’s not only Facebook’s fault. And the company can’t fix the problems alone. Karen Kornbluh served as U.S. ambassador to the Organization for Economic Cooperation and Development (OECD) under President Barack Obama and is now senior fellow for digital policy at the Council on Foreign Relations. “The leaders of Facebook are being asked by the market to generate growth and continued profits,” Kornbluh explains, “but so far there’s no clear ask from society or government to do anything different. Their motto of ‘move fast and break things’ made sense for an internet that was a tiny piece of the economy and society. But when our whole lives moved online, we needed to have a societal conversation. And we didn’t have that. Shame on all of us. So the question, really, is what is society going to do?”

Reviewing three recent books about labor, the economy, and Silicon Valley giants, Nitasha Tiku reconsiders the meaning of “disruption.”

It is only now, a decade after the financial crisis, that the American public seems to appreciate that what we thought was disruption worked more like extraction—of our data, our attention, our time, our creativity, our content, our DNA, our homes, our cities, our relationships. The tech visionaries’ predictions did not usher us into the future, but rather a future where they are kings.

They promised the open web, we got walled gardens. They promised individual liberty, then broke democracy—and now they’ve appointed themselves the right men to fix it.

Gary Marcus and Ernest Davis, professors of neural science and computer science, respectively, tell Facebook not to rely on artificial intelligence to clean up the News Feed:

To get to where Mr. Zuckerberg wants to go will require the development of a fundamentally new A.I. paradigm, one in which the goal is not to detect statistical trends but to uncover ideas and the relations between them. Only then will such promises about A.I. become reality, rather than science fiction.

And finally ...

Today we celebrate three incredible tweets, in ascending order.

You have to know your video games to understand Elon Musk’s social-networking analogies, but even then it barely coheres as an idea. I’m sharing this mostly because I find it extremely amusing that Elon Musk played Bloodborne, one of the hardest games I have ever played, and thought to himself, “this is exactly like Twitter.”