Does Facebook Know What a 'Hard Question' Is?

In June of last year, Facebook announced its latest effort to hold itself accountable and be more transparent. Called “Hard Questions,” the series of blog posts is intended to show users where the company stands on some of the thornier issues facing the social network. But how hard can a question really be when you get to both hand-select it and take as much time as you’d like to formulate a response? The answer: not all that hard.

At face value, some of the Hard Questions posed by Facebook over the last several months do appear to tackle contentious subjects. Facebook’s first post of the series, “How We Counter Terrorism,” outlined the ways in which the social network uses artificial intelligence to fight terrorism on the platform. While informative, detailing how the company identifies dangerous content is arguably not hard; it’s just transparent.

And the posts rarely get ahead of a scandal. Most of them are penned conveniently following some sort of public backlash. “So Your Kids Are Online, But Will They Be Alright?” was posted a few weeks after former Facebook president Sean Parker voiced his concerns about the consequences of the social network he himself helped build. “God only knows what it’s doing to our children’s brains,” Parker told an audience at an Axios event. “Is Spending Time on Social Media Bad for Us?” was posted a few days after Facebook’s former vice president of user growth, Chamath Palihapitiya, made headlines for telling a Stanford audience, “I think we have created tools that are ripping apart the social fabric of how society works.” The last four Hard Questions, published between March 21st and Monday, all address concerns over invasions of privacy, specifically with regard to user data and the manipulation of users during elections—that is, the leading offences plaguing Facebook amid the Cambridge Analytica scandal and Mark Zuckerberg’s congressional hearings.

I’m not arguing that these posts are worthless or inherently bad. Any transparency and accountability is meaningful, especially from a company that has failed so impressively at protecting the privacy of its users. But it’s laughable to characterise these posts as “hard questions” when so many of them are rolled out to absolve the company of poor behaviour already making headlines.

Here are some truly hard questions we’d like some answers to: When was the first time Mark Zuckerberg had a hunch his platform was being used for foreign election interference? For psychological manipulation? If there was hard evidence suggesting the world would be a better place without Facebook, would you shutter the service? Also, why dry toast?