Q&A: How Facebook regulates the wild west of political ads

NEW YORK — With less than three months to go before the midterm elections, Facebook is enforcing strict new requirements on digital political ads. Among other things, they force political ad buyers to verify their identities by receiving mail at a known U.S. address.

Facebook credits the system with catching at least one ad from a fake, and possibly Russia-connected, Facebook page that it discovered trying to sow political discord ahead of the U.S. midterm elections.

But how does the system enforce these rules? How does it even define a political ad, particularly when a computer is making that judgment? And did any ads evade Facebook’s detection before it discovered a new set of fake and possibly Russia-linked pages this week?

Here are some questions and answers on how political ads work on Facebook.

What was the problem with political ads?

After revelations that Russians bankrolled thousands of fake political ads during the 2016 elections, Facebook and other social networks faced intense pressure to ensure that doesn’t happen again.

Unlike political ads on television, online ads are not required to disclose who pays for them, making it harder to evaluate their message — and much easier for buyers to disguise their true motives.

What did Facebook do?

Last fall, Facebook announced that it would verify political ad buyers in federal elections by requiring them to confirm their names and locations. Political ads must also carry a “paid for by” disclosure. Facebook also said it would archive all political ads for the public, including the details of how they were targeted.

The company defines a political ad as any advertisement related to U.S. elections, such as those referencing current or former candidates, political parties, political action committees or ballot measures. It even includes “get out the vote” drives.

In May, the company expanded the requirements to cover U.S. ads that touch on polarizing issues such as gun control and abortion rights. But defining what counts as an issue ad isn’t always easy. For example, both education and immigration can be political issues — but ads for universities or immigration lawyers generally are not.

So Facebook produced a list of “top-level issues” that, if mentioned in an ad, subject it to closer scrutiny. These topics range from specifics like taxes and terrorism to broad issues such as health, poverty and “values” (which Facebook does not define).

How does Facebook enforce this?

Anyone can try to buy a political ad on Facebook; it’s up to the company to enforce its rules.

After an ad is submitted through Facebook’s automated system, the company reviews its images, text, the audiences it targets, and the Facebook page it aims to promote. (In a political context, that page might extol a candidate, blast a disfavored policy or solicit donations.)

For example, Facebook says it looks at ads as well as their landing pages to see if they mention current or former political candidates, in which case the ads are flagged as political and require additional verification to run.

Such reviews are carried out by both humans and automated systems. Facebook declined to explain how they divide up that work.

Are there additional rules?

Yes. Facebook also requires that a political ad buyer must be an “administrator” on the page promoted by the ad. So if Alia Upright is running for Congress, no one can run an ad promoting Upright’s Facebook page unless they’re listed as an administrator.

In the case of political and issue ads, ad buyers must also verify their identity and U.S. mailing address. That starts with submitting a government-issued ID and the last four digits of their Social Security number. Once that’s verified, Facebook mails a postcard to their address with a special code to be entered online.

Can people cheat?

Facebook says its systems are working. While the fake pages it disclosed Tuesday spent about $11,000 to run roughly 150 ads, it says most of these ads ran before its new rules were in place. Facebook says that one fake page, called Resisters, attempted to run an ad after the rules went into effect but was denied by its system.

Political consultant Beth Becker, who owns Becker Digital Strategies, welcomes the added accountability, but says the new rules don’t actually fix the problems they address. For instance, she notes, there’s nothing to stop unscrupulous but verified people from serving as “cutouts” to run ads for others who don’t meet Facebook’s requirements.

“I think these are band-aids that look pretty to people who don’t know the systems or understand the social media ecosystem,” Becker said.

What else can go wrong?

Sometimes Facebook’s systems misfire and identify clearly non-political ads as political — for instance, when it took down ads for Bush’s baked beans because they contained the word “Bush.” Media organizations have also had their ads flagged when they promoted news stories about political candidates or important issues.

Such problems can present huge issues for fast-moving political campaigns, many of which depend on Facebook advertising. If ads or the pages that sponsor them are suddenly flagged at crucial times in a close election — say, the week absentee ballots are mailed out — “that would be devastating,” says Matt Shupe, a GOP consultant.