Stay on target

Sterilizing a global social network is not a one-size-fits-all operation: Content moderators work round-the-clock to assess and remove (as necessary) hate speech, nudity, and violence.

That’s a lot of pressure for anyone—unless you’re an emotionless robot, which is exactly what Facebook is using to spot problematic posts.

At this week’s F8 developers conference in San Jose, Calif., the company revealed for the first time how it employs artificial intelligence, machine learning, and computer vision to proactively purge the site.

“It’s taken time to develop this software—and we’re constantly pushing to improve it,” Guy Rosen, vice president of product management, wrote in an accompanying blog post.

The process, however, is not exactly cut-and-dried: Humans still better understand context, and can determine whether a statement is being shared to spread hate or raise awareness about it.

For now, Facebook’s AI is detecting possible policy violations (in English and Portuguese), then passing them along to mortals for review.
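In rough terms, that "AI flags, humans decide" flow works like a triage queue: a model scores each post, and only posts above a confidence threshold are routed to reviewers. Here's a minimal sketch of that idea; every name (`score_post`, `triage`, `REVIEW_THRESHOLD`) and the toy keyword "model" are illustrative assumptions, not Facebook's actual system.

```python
# Hypothetical sketch of the flag-then-review pipeline described above.
# None of these names or thresholds come from Facebook; they are illustrative.

REVIEW_THRESHOLD = 0.8  # confidence above which a post is queued for humans

def score_post(text: str) -> float:
    """Stand-in for a trained classifier; returns a violation probability."""
    flagged_terms = {"hate", "attack"}  # toy heuristic in place of a real model
    words = set(text.lower().split())
    return 1.0 if words & flagged_terms else 0.1

def triage(posts: list[str]) -> list[str]:
    """Return only the posts that should go to human reviewers."""
    return [p for p in posts if score_post(p) >= REVIEW_THRESHOLD]

queue = triage(["nice photo!", "an attack on this group"])
# queue holds only the second post; a human makes the final call
```

The point of the threshold is exactly the context problem Rosen describes: the machine narrows the haystack, but a person still decides whether a flagged post spreads hate or condemns it.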

“When I talk about technology like artificial intelligence, computer vision, or machine learning, people often ask why we’re not making progress more quickly. And it’s a good question,” Rosen said.

After all, CEO Mark Zuckerberg launched Facebook from his dorm room when he was 20. Shouldn’t the entrepreneur be able to pump out some high-tech code to fix everyone’s problems?

Well, no.

“We are still years away from [artificial intelligence] being effective for all kinds of bad content, because context is so important. That’s why we have people still reviewing reports,” Rosen explained. “And more generally, the technology needs large amounts of training data to recognize meaningful patterns of behavior, which we often lack in less widely used languages or for cases that are not often reported.”

Facebook has even reached into the Instagram cookie jar: As reported by TechCrunch, the firm used billions of public photos (annotated by users with hashtags) to train its image recognition models.
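The trick in that approach is treating user-applied hashtags as free (if noisy) training labels, a technique often called weak supervision. A minimal sketch of the labeling step might look like this; the function name and vocabulary are made up for illustration, not taken from Facebook's pipeline.

```python
# Hypothetical illustration of hashtags-as-labels ("weak supervision"):
# map a photo caption's hashtags onto a known label vocabulary.

def hashtags_to_labels(caption: str, vocab: set[str]) -> list[str]:
    """Extract hashtags from a caption that match a label vocabulary."""
    tags = [w.lstrip("#").lower() for w in caption.split() if w.startswith("#")]
    return [t for t in tags if t in vocab]

vocab = {"dog", "beach", "sunset"}
print(hashtags_to_labels("Great day! #beach #Sunset #nofilter", vocab))
# → ['beach', 'sunset']
```

Tags outside the vocabulary (like "#nofilter" above) are simply dropped, which is why this kind of data is noisy but, at billions of photos, still useful for pretraining image models.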

The company, still reeling from the recent Cambridge Analytica scandal, is starting to open up about its in-house tools and new transparency goals.

“Reports that come from people who use Facebook are so important—so please keep them coming,” Rosen said. “Because by working together we can help make Facebook safer for everyone.”