The new dark money behind Facebook's political ads

I was on a reporting assignment today and had less time to read than usual. If something obvious seems missing here, hopefully you’ll see it in tomorrow’s edition.

Just over a year ago, Facebook announced it would create a database of advertising and make it available for the benefit of researchers, journalists, and the public. After a successful test in Canada, Facebook introduced the archive to the United States earlier this year, and plans to introduce a modified version in the United Kingdom shortly.

Of all the steps Facebook has taken in the wake of the 2016 election to improve trust in the platform, the political ads archive has been among the most effective. It allows anyone to see what ads are running, how much money is being spent on them, and who is being targeted by them. It also requires anyone who wants to buy political ads to verify their identity with a government ID and a code mailed to their address.

Collectively, the ads tell a story about how people are using Facebook to influence behavior, while the verification requirement is meant to ensure advertisers are who they say they are.

Facebook has said that it plans to improve the archive over time. But in the weeks before the US election, some significant flaws have appeared.

The Daily Beast linked MotiveAI to an entity known as News for Democracy, which creates political advocacy ads and uses them to promote 14 pages that it owns. The ads, which to date have largely featured testimonials about the benefits of universal healthcare, have been mostly targeted at Arkansas women between the ages of 55 and 64, and at Kansas men under the age of 44, Madrigal reports.

In September alone, the company spent almost $400,000 on more than 16 million impressions. It’s one of the largest political advertisers on Facebook — and were it not for some sleuthing by reporters, we would have no idea who it is.

If you’re a fan of universal healthcare, this may not seem so scary. But Kevin Roose has a story today illustrating the flip side of this obfuscation. In Virginia’s 10th Congressional District, an unknown person is buying political ads that portray the Democratic candidate, Jennifer Wexton, as (among other things) a Nazi:

The ads paint Ms. Wexton as an “evil socialist,” with language and imagery not typically found in even the roughest campaigns. In one ad, which began running on Monday, Ms. Wexton is pictured next to an image of Nazi soldiers, and the ad’s text refers to her supporters as “modern-day brown shirts.” In another, which first ran this month, Ms. Wexton is compared to Christine Blasey Ford, the woman who accused the new Supreme Court justice, Brett M. Kavanaugh, of sexual assault. The image is captioned: “What’s the difference??? Nothing!! Both are liars.”

The person or group behind the ads is known to Facebook, but a mystery to the public. The funding disclaimer attached to the ads reads, simply, “Paid for by a freedom loving American Citizen exercising my natural law right, protected by the 1st Amendment and protected by the 2nd Amendment.” There is no other identifying information on the page.

Facebook requires advertisers to fill out a form disclosing who paid for the ads, but the advertiser can write anything they want in the field. Hence, the “freedom loving American Citizen” you see here.

Of course, corporations’ right to pump so-called dark money into elections has been upheld by the Supreme Court. As Madrigal notes, if there’s a fix here, it can’t be Facebook’s alone:

While Facebook requires all ad sponsors to send them a government ID, so that they can be “verified,” Facebook shares literally no information about the company that paid for a given ad, aside from the name. Given that LLCs are opaque and can pop into and out of existence, there is no formal mechanism for figuring out who is pushing what agenda. Though Fletcher maintains that his funding comes from Americans, it’s easy to imagine a hypothetical in which it does not. Let’s say MotiveAI had substantial Chinese or European investors. That foreign involvement could very easily be laundered through an American starting an LLC. Even better, a thicket of LLCs that would make it more difficult to connect different purchases up.

I’ve said before that one reason I started this newsletter was to track the way that influence operations on Facebook would transform as the company began taking steps to rein them in. The hidden hands behind these ads represent the new dark money in politics, and here’s hoping we find more ways to shine sunlight on them.

Ben Kesling and Dustin Volz profile Kris Goldsmith, a vigilante Facebook moderator who hunts down scammy pages aimed at US military veterans:

Working from offices, coffee shops, and his apartment, he has cataloged and flagged to Facebook about 100 questionable pages that have millions of followers. He sits for hours and clicks links, keeping extensive notes and compiling elaborate spreadsheets on how pages are interconnected, and tracing them back, when possible, to roots in Russia, Eastern Europe or the Middle East.

It’s not enough that Saudi Arabia is apparently torturing and murdering real journalists; the country is also threatening jail time for any dissenters under the banner of “fake news.”

Saudi Arabia is threatening to give 5-year prison terms and heavy fines to anyone caught spreading “fake news” online, a warning to those discussing the suspected murder of Washington Post journalist Jamal Khashoggi. The threat, published over the weekend in the Saudi Gazette, echoes one of President Trump’s favorite phrases to demean any journalism that he finds unfavorable to his regime.

The Saudi Gazette cited Article 6 of the Saudi Arabia’s cybercrimes regulations which makes it against the law to breach “public order, religious values, public morals and privacy.” The law makes no distinction between Saudi citizens and foreign nationals found to be in violation of the draconian rules.

Here is just a spectacularly zeitgeisty story from Donie O'Sullivan about how people in Bangladesh are promoting fake women’s marches on Facebook to sell T-shirts. (I realize the headline makes it sound like the women are running from Bangladesh; in fact, the scam was run from there.)

The pages were not run by the Russian trolls who meddled in the 2016 US election, and who have continued doing so since. They were run from Bangladesh, a CNN investigation has found — and they were designed to exploit Americans’ interest in politics and protests in order to sell t-shirts.

In all, there were 1,700 separate Facebook pages designed to look like they were run by local Women’s March organizers, a source familiar with Facebook’s investigation into the issue told CNN.

DORSEY: I think Twitter does contribute to filter bubbles. And I think that’s wrong of us. I think we need to fix it. But I don’t think it’s the chronological timeline or the ranked timeline that does it. I think it’s the fact that we only enable you to follow an account.

A rare and widespread outage of YouTube struck late Tuesday evening. The cause remains unknown:

As with all Google-operated services, serious downtime for YouTube is pretty rare. YouTube TV did suffer service interruption at an inopportune time during this summer’s World Cup, however, and channel pages went down for a while in April. Perhaps most infamously, Pakistan’s government accidentally caused an hours-long global YouTube blackout a decade ago by attempting to censor a trailer for an anti-Islamic film.

A security bug that hit Tumblr’s recommended blogs module may have exposed users’ private information, according to an open letter. Information like email addresses, passwords, IP addresses, and self-reported locations may have been exposed for any accounts the bug affected.

It’s unclear if the bug affected individual accounts, according to the open letter, but an investigation concluded that the bug “was rarely present.”

Before Logan Paul filmed a dead man and uploaded the footage to YouTube earlier this year, he was set to film a movie with the spectacular title of The Thinning: New World Order. The project was shelved, but now it’s back on, reports Julia Alexander:

Paul posted a trailer for the film on YouTube last night. There’s no release date for the YouTube Premium (formerly YouTube Red) project yet, but Paul suggests in the comments that it’ll be available for Premium subscribers “very soon.” Paul also tweeted out the news with a devil emoji, announcing the “surprise” move.

Huffman has hinted previously that an IPO could be in Reddit’s future. But now, “we look at our peers and we look at what’s going on with the market – I want to make sure if we do something like that, that we can continue to maintain the courage of our convictions,” said Huffman, who is better known as “spez” on the website he co-founded in 2005 with Alexis Ohanian.

Brian X. Chen compares the data that Google and Facebook have collected on him:

Whenever I was perturbed by parts of my Google data, like a record of the Android apps I had opened over the past several years, I was relieved to find out I could delete the data. In contrast, when I downloaded my Facebook data, I found that a lot of what I saw could not be purged.

Twitter is bringing some much-needed clarity to its reporting process, Nick Statt reports:

In an update outlined this morning on the company’s blog, Twitter will now clearly highlight when a reported tweet had an enforcement action taken against it. The goal, writes Twitter product manager Sam Toizer, is to help the public understand when a rule-breaking tweet was forcibly taken down and not just simply deleted by the user due to backlash.

One of my favorite podcasts, Why’d You Push That Button?, is back for a third season, and the premiere episode has a timely topic for us: should you delete your tweets? Max Read and Brianna Wu guest star.

Cristina Tardáguila, Fabrício Benevenuto and Pablo Ortellado are the authors of a new report about misinformation in Brazil. They offer some practical steps WhatsApp could take to reduce the viral spread of false news:

Restrict forwards. This year, after the dissemination of rumors on WhatsApp provoked lynchings in India, the company put restrictions on the number of times that a message could be forwarded. Globally, the number of forwards was reduced to 20, while in India it was reduced to five. WhatsApp should adopt the same measure in Brazil to limit the reach of disinformation.

Restrict broadcasts. WhatsApp allows every user to send a single message to up to 256 contacts at once. This means that a small, coordinated group can easily conduct a large-scale disinformation campaign. This could be prevented by limiting the number of contacts to whom a user could broadcast a message.

It’s difficult to determine how and where companies like Facebook went wrong, writes Josephine Wolff, which makes regulation challenging:

At first glance, the lack of consequences that companies face for data breaches might seem to be a clear problem and something that can be easily remedied through heavy regulation like the European Union’s General Data Protection Regulation. However, the problem turns out to be more complicated than that. Two challenges, in particular, have hindered effective legal and regulatory responses to breaches: determining whether a company was negligent in its security practices, and figuring out how to calculate the monetary value of stolen personal information and the harms inflicted on the people whose data was breached.

The fact that your personal information was stolen from a company does not necessarily mean that the company did a poor job of securing your data and therefore deserves to be punished. The Facebook breach, for example, was made possible by three software vulnerabilities tied to user tools for privacy and for uploading birthday videos. These vulnerabilities might seem like problems that Facebook should have caught early on, but the truth is that every company has bugs like these in its software.