Zuckerberg delivers a progress report on election security

Since the aftermath of the 2016 election, Facebook has invested millions of dollars in an effort to shore up the platform against future attacks. Late Wednesday night, Mark Zuckerberg published a 3,300-word progress report on how the company has been doing.

The report contained little in the way of news. The measures Zuckerberg outlined have all been announced publicly before. They include:

Removing fake accounts.

Removing posts that use hoaxes to incite violence.

Preventing publishers of hoaxes from selling ads against their content.

Requiring advertisers to verify their identities and allowing the public to see relevant information about all ad campaigns on Facebook.

Setting up an independent election research commission to let outside academics examine the influence of social networks on democracy.

Coordinating with other social platforms and the government to identify and remove influence campaigns.

Zuckerberg concludes:

In 2016, we were not prepared for the coordinated information operations we now regularly face. But we have learned a lot since then and have developed sophisticated systems that combine technology and people to prevent election interference on our services.

This effort is part of a broader challenge to rework much of how Facebook operates to be more proactive about protecting our community from harm and taking a broader view of our responsibility overall.

Given that we don’t fully understand the nature of the threats on the platform — many are uncovered only after the fact — it’s impossible to say with any certainty how effective these measures have been. Still, Zuckerberg’s post highlights a series of earnest, good-faith efforts at Facebook to prevent the problems that marred the 2016 election from happening again. This list compares favorably to efforts at YouTube and Twitter, which generally have been slower to act and less forthcoming about what they’re doing.

At the same time, a post like Zuckerberg’s can encourage us to assess the company’s efforts by how hard Facebook is trying. Because it is written by the founder, Zuckerberg’s note has the feel of a quarterly self-evaluation. You read all 3,300 words and think, gosh, he’s working hard on the problem! Which, of course, he is.

But I think this is the wrong way to think about things.

Earlier this week I wrote about the limits of CEO interviews, which center the feelings of the founder rather than the consequences of their actions. Then the Guardian’s Julia Carrie Wong came along and put it better than I did: “It’s time for tech journalism to move away from the idea that we can understand this industry by understanding the great men who built it,” she tweeted. “What does it matter that we understand Zuck when Zuck himself so clearly doesn’t even understand Facebook?”

The indispensable Matt Levine picks up on Wong’s tweet in his newsletter today and extends the argument:

No one at Facebook sat down to build an election interference function. They sat down to build a system for purposes that they thought were good, and are happy to brag to you about: sharing baby pictures, connecting the world, making piles of money by showing you ads, that sort of thing. All — most, anyway — of the bad effects of Facebook are emergent features of the system that they built for the good effects; that system itself, and its messy interactions with billions of people out in the real world, creates the bad effects.

I don’t mean to claim that Zuckerberg, or anyone else at Facebook, is or is not responsible in some moral or legal sense for the bad effects of Facebook, or that those effects could or could not or should or should not have been predicted, or that they can or can’t be fixed, or whatever. I just mean to endorse Wong’s claim that if you want to understand Facebook, the main thing you have to understand is Facebook, the product and architecture and algorithms and effects and interactions, the system of it. Understanding the people who built it is not a substitute for that, because the system has moved beyond their conscious control. Facebook does things in the world that are not directly willed by the people who built it; to understand and predict those things, you don’t interview its founder, you examine its workings.

This is why I reject Zuckerberg’s idea that the fight against bad actors on Facebook is an “arms race.” The military metaphor is helpful to Facebook in part because it’s so easy to visualize. The Kremlin builds one missile; Facebook builds a bigger one. This metaphor suggests that the sides are of equal power: the good guys and the bad guys are fighting neck and neck, with the lead swinging back and forth depending on the day. Facebook uses the “arms race” language, in other words, because it flatters Facebook.

But the other view — Levine’s emergent-systems view — doesn’t allow for such a rosy assessment. Building a bigger arsenal — of artificial-intelligence tools, or advertiser requirements, or whatever — won’t necessarily meet the challenge ahead. This isn’t conventional warfare; it’s guerrilla warfare. It’s not the Cold War, where “arms race” first entered our vocabulary; it’s the Vietnam War.

And I probably don’t have to tell you how the imperial power fared in that one.

Jack Poulson worked in Google’s research and machine intelligence department, where he helped improve the accuracy of search results, reports Ryan Gallagher. He quit recently in protest of Google’s China plans, along with “about five” other employees. (Another good note in this story: Google has locked down access to its weekly all-hands live streams, following leaks like yesterday’s.)

In his resignation letter, Poulson told his bosses: “Due to my conviction that dissent is fundamental to functioning democracies, I am forced to resign in order to avoid contributing to, or profiting from, the erosion of protection for dissidents.”

“I view our intent to capitulate to censorship and surveillance demands in exchange for access to the Chinese market as a forfeiture of our values and governmental negotiating position across the globe,” he wrote, adding: “There is an all-too-real possibility that other nations will attempt to leverage our actions in China in order to demand our compliance with their security demands.”

Here’s some more pressure on Google to address concerns about its Chinese search engine, from David Shepardson:

Sixteen members of the U.S. House of Representatives, including liberal Democrats and conservative Republicans, said in a letter they had “serious concerns” about the potential step and asked Google if it would agree to restrict certain words, terms or events in China. The company did not immediately comment on Thursday.

Facebook said Thursday it would expand its efforts to scan photos and videos uploaded to the social network for evidence that they’ve been manipulated, as lawmakers sound new alarms that foreign adversaries might try to spread misinformation through fake visual content.

In 17 countries, including the United States, Facebook said it has deployed its powerful algorithms to “identify potentially false” images and videos, then send those flagged posts to outside fact-checkers for further review. Facebook said it’s trying to stamp out content that has been doctored, taken out of context or accompanied by misleading text.

James Vincent and Russell Brandon have a new explainer about the Copyright Directive, which is threatening to wreak havoc on the internet:

Both measures attempt to redress an imbalance at the core of the contemporary web: big platforms like Facebook and Google make huge amounts of money providing access to material made by other people, while those making the content (like music, movies, books, journalism, and more) get an ever-shrinking slice of the pie.

Not everyone involved in the creative industry is complaining about this, obviously. It’s benefited a lot of people, and a lot of internet users. But it’s obvious that the modern, ad-supported web has left companies in Silicon Valley extremely rich while torpedoing revenue in other industries. The Copyright Directive is supposed to level the playing field.

James Poniewozik examines the president’s new fondness for video tweets:

The aesthetics of the Rose Garden videos are more YouTube than NBC, unadorned by graphics or soundtrack. (This stands in contrast with the White House’s more heavily produced videos, slathered with epic trailer music.) Mr. Trump stands stiffly, glowering at the camera, and speaks about trade, the stock market, the negotiations with North Korea. (“Nothing bad can happen. It’s only going to be positive.”)

Did I say “speaks”? “Barks” is more like it. At a rally, Mr. Trump can vary his tone, playing off the audience and its response — his voice rising into outrage and dropping into cutting asides. But alone with a silent camera, he falls into a single “Everything must go!” salesman mode, whether he’s throwing red meat about immigrants to a xenophobic base or pitching red meat for The Sharper Image.

Nicole Wong worked at Google and Twitter before becoming deputy CTO of the United States. She spoke with Kara Swisher about why platforms should make their chief objectives “accuracy, authenticity and context,” rather than mere personalization:

WONG: Facebook’s in this really awkward position where it’s trying to have a global platform and one set of rules imposed consistently. And the fact of the matter is that every understanding of content is incredibly nuanced from a perspective of what it is in the culture, what it is in the political system, how the legal environment handles a content problem. And so I know what they’re trying to do, and I understand that that’s the only way to scale it, I just think it’s really hard.

And I will say that as someone who gets to say that in hindsight, ’cause I’m not that decider anymore. And in the days when I did it, it was millions, not billions of users, right? It was hundreds of … I don’t remember, it was like … in the tens or scores of hours per minute on YouTube, not in the hundreds of hours of content on YouTube. And so, I actually had the time to say … my folks would level up something for me to see, and I would get a day to sort of think about it and get some more information about like, “Well, what does this mean in India? What are the ramifications?” and to touch base with people in India to say, “Should I do this or that?” They appear not to have that latitude anymore, and what I’m hearing is that they have four or five seconds per piece of controversial content to make a decision.

Mark Bergen and Austin Carr wonder what the reclusive CEO of Alphabet has been up to during a time of angst at his company. (It turns out that he has been relaxing on his island.)

A slew of interviews in recent months with colleagues and confidants, most of whom spoke on condition of anonymity because they were worried about retribution from Alphabet, describe Page as an executive who’s more withdrawn than ever, bordering on emeritus, invisible to wide swaths of the company. Supporters contend he’s still engaged, but his immersion in the technology solutions of tomorrow has distracted him from the problems Google faces today. “What I didn’t see in the last year was a strong central voice about how [Google’s] going to operate on these issues that are societal and less technical,” says a longtime executive who recently left the company. […]

What’s occupying Page’s time today? People who know him say he’s disappearing more frequently to his private, white-sand Caribbean island. That’s not to imply that, at 45, he’s already living the daiquiri lifestyle. He still oversees each Alphabet subsidiary, though the extent of his involvement is vague. Along with Google co-founder Sergey Brin, who’s now Alphabet’s president, Page even occasionally holds court at the company’s weekly all-hands “TGIF” meetings at its Mountain View, Calif., headquarters. He sometimes fields questions from employees, though he mostly defers to Pichai and other corporate leaders, according to current Googlers. Page has reached a point where he takes on only rare projects that deeply fascinate him, like the sci-fi pursuits at X, Alphabet’s secretive research lab.

Fake plays are the fake news of music, and increasingly they may be distorting the Billboard charts, Blake Montgomery reports:

While manipulating streaming plays is becoming a more widely used tactic, it’s unclear just how much of an impact it can have on Drake-level artists. But even if it’s just a drop in the bucket, the fraud could erode the veracity of the widely respected Billboard chart metrics, especially since the fan campaigns appear to be getting more sophisticated. Harry Styles fans weaponized Tumblr accounts and VPNs to promote his first solo single and album in 2017, but BTS fans took the blueprint further, creating tests for wannabe helpers to verify their devotion.

It’s not just the US, either: Rampant allegations of chart manipulation in South Korea recently triggered an investigation by the Ministry of Culture, Sports and Tourism.

A small group of partners can now integrate public Snapchat stories into their news coverage:

Snap is expanding its crowdsourced Our Story content to its various media partners, including CNN, NBC News, and NowThis, who’ll be able to draw on the user-submitted images and videos for their own Snap stories, according to a report from Deadline.

Introduced back in 2015, the Our Story feature on Snapchat collects Snaps that users submit for specific categories, places, events, or topics — think a sports game or a protest rally — into a single, curated Snap story that offers a wider, on-the-ground perspective for a particular event.

Parents can now handpick which videos they want their children to see in the YouTube Kids app, rather than subject their children to its algorithmic choices. There’s now an “older” and a “younger” version of YouTube Kids, with the former geared to 8- to 12-year-olds.

Facebook has a new tool that uses AI to suggest fixes to broken code, Josh Constine reports:

Facebook has quietly built and deployed an artificial intelligence programming tool called SapFix that scans code, automatically identifies bugs, tests different patches, and suggests the best ones that engineers can choose to implement. Revealed today at Facebook’s @Scale engineering conference, SapFix is already running on Facebook’s massive code base and the company plans to eventually share it with the developer community.

“To our knowledge, this marks the first time that a machine-generated fix — with automated end-to-end testing and repair — has been deployed into a codebase of Facebook’s scale” writes Facebook’s developer tool team. “It’s an important milestone for AI hybrids and offers further evidence that search-based software engineering can reduce friction in software development.” SapFix can run with or without Sapienz, Facebook’s previous automated bug spotter. It uses them in conjunction with SapFix suggesting solutions to problems Sapienz discovers.
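The workflow Constine describes amounts to what researchers call “generate-and-validate” program repair: propose candidate patches, run the test suite against each, and surface the ones that pass for a human engineer to approve. Here’s a minimal, hypothetical Python sketch of that loop; all the names and the toy bug are my own illustration, not Facebook’s actual tooling.

```python
# Illustrative generate-and-validate repair loop. Everything here is a
# hypothetical stand-in, not SapFix's real implementation.

def buggy_max(a, b):
    # The bug: returns the smaller of the two values.
    return a if a < b else b

# A tiny test suite of ((args), expected) pairs, standing in for the
# automatically generated tests a tool like Sapienz would supply.
TESTS = [((1, 2), 2), ((5, 3), 5), ((-1, -4), -1)]

# Candidate patches: alternative bodies a repair tool might generate,
# e.g. by mutating the comparison operator in the original code.
def patch_gt(a, b):
    return a if a > b else b

def patch_ge(a, b):
    return a if a >= b else b

def patch_swap(a, b):
    return b if a < b else a

CANDIDATES = [patch_gt, patch_ge, patch_swap]

def passes(fn, tests):
    """A candidate is valid only if it passes every test in the suite."""
    return all(fn(*args) == expected for args, expected in tests)

def suggest_fixes(candidates, tests):
    """Return the patches that pass the full suite, for an engineer to
    review, mirroring the suggest-then-choose workflow described above."""
    return [fn for fn in candidates if passes(fn, tests)]

if __name__ == "__main__":
    assert not passes(buggy_max, TESTS)  # the original code fails its tests
    print([fn.__name__ for fn in suggest_fixes(CANDIDATES, TESTS)])
```

The interesting engineering problem, at Facebook’s scale, is everything this sketch elides: generating plausible patches from a huge codebase and running end-to-end validation cheaply enough to do it continuously.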

Fast forward to today, when Google is being falsely accused of censoring speech in the United States, when what it is really doing is mulling a return to censorship in China.

If this makes you pause, it should, and Washington politicians should take all their sanctimony and direct it at the China issue, which actually deserves some scrutiny. Perhaps that is the real reason Google avoided sending its current chief executive, Sundar Pichai, to the recent Senate hearings, so he could avoid explaining what it was thinking when it came to China 2.0: Now With 100 Percent More Hypocrisy.

Responding to this week’s fight between ThinkProgress and the Weekly Standard, Alexios Mantzarlis says Americans’ tendency to view everything through a partisan lens is obscuring important questions about fact-checking on Facebook:

How literal should their fact-checking on Facebook be? What should it primarily target? How can it be effectively appealed? And how can all these rules work across the more than 15 countries the tool is currently active in?

These are all questions fact-checkers and Facebook are asking themselves, but not ones there has been a serious public debate over. This would have been a great moment for heavy-hitting media critics to weigh in. Instead, most abdicated their responsibility, content to let the debate be dominated by narrow interpretations along the lines of “Facebook is catering to conservatives.”

Stefan Heck has a beautiful piece about all of the dumb-dumbs who come out of the woodwork when a joke tweet goes viral. I’m somewhat sympathetic to these dumb-dumbs — Twitter abhors context, and so it can be very difficult to tell which outraged tweets are made in earnest, and which are put-ons. But Heck’s piece is good and funny and he has about as apt a description of a day on Twitter as any I’ve seen:

If you aren’t familiar with how Twitter works, each morning, somebody posts something stupid. The rest of Twitter takes turns pummeling this person into submission. Then, we forget what we were mad about and do it all over again the next day.

Looking forward to tomorrow!

Talk to me

Send me tips, comments, questions, corrections, and what you are doing to protect the midterm elections: casey@theverge.com.