On Tuesday Instagram announced its latest round of anti-bullying tools, extending its previous focus on textual comments to new computer vision algorithms allegedly able to detect “bullying” in photographs themselves. In typical fashion, Facebook announced the new features out of the blue, without consultation with its broader user community, and provided no detail beyond the vague statement that it was relying on “machine learning technology to proactively detect bullying in photos and their captions.” While the company offered that images would still be subject to human review prior to removal, such machine-assisted review workflows typically devolve over time into humans merely rubber-stamping the algorithmic output, both to meet ever-increasing quota requirements and as their blind trust in the machine solidifies. What does Facebook’s new initiative tell us about how the future of the web is increasingly being blindly entrusted to opaque black boxes with no oversight and no insight into how they function?

From Silicon Valley’s early days of embracing freedom of speech at all costs to its reinvention as the party of moderation and mindful censorship, web companies have increasingly adopted the stance that they alone know what is best for society and that there is no need to listen to their billions of users or engage with them in any meaningful way. Instead, as with Instagram’s latest rollout, social media companies tend to blindside their users, announcing out of the blue, with increasing regularity, massive new changes in the form of new filtering or moderation algorithms or initiatives. No detail is given beyond the fact that “machine learning” is being used: no statistics on how much training data was used or on the algorithm’s accuracy when it entered service. Subsequent public statements are typically vague, touting “successes” in misleadingly worded claims that are not corrected when the media reports them wrong and that rarely include any kind of accuracy statistics. All requests for further detail or to permit external expert review of accuracy numbers are declined. As our online world becomes increasingly filtered, censored and “sanitized,” it is simultaneously becoming increasingly opaque.

Instagram’s latest initiative is no different. After initially saying it was working on getting responses, the company ultimately did not respond to questions regarding its new tools. While its peers have recognized the critical importance of building their machine learning models from diverse training data drawn from across the world, capturing the full breadth of the world’s languages, geographies and cultures, Facebook did not comment when asked how many languages its tools had been trained on, how many countries it had drawn bullying examples from, and how it had ensured that its algorithms were culturally aware, especially by drawing on examples from outside the US and Europe.

The company also did not respond when asked whether it would permit external expert review of its new algorithms and their accuracy rates. It has traditionally argued that doing so would assist bad actors in gaming its systems, but such arguments have little merit. Permitting a small group of known experts to review the tools and comment on their overall accuracy would offer little assistance to bad actors, while the very argument that even a small amount of insight would be enough to defeat the tools suggests the company believes them to be so fragile and limited that even the slightest change would render them useless. More to the point, as I’ve noted in the past, in the US our criminal justice system operates largely in the public eye, and we do not argue that allowing the public to see a judge’s ruling gives future criminals insights into how to avoid a conviction.

Perhaps most importantly, the company did not comment on how it would respond to the criticism that it is using its two billion users as unwilling and unaware guinea pigs for its own societal-scale research and behavioral experiments, forcibly rolling out new tools without notice and without any insight into their function, and forcing the public to essentially beta test them in real life, with the potential impact of errors borne by the public. In essence, social platforms today privatize the monetary rewards of successful tools while socializing the very real human impact of algorithms and platforms run awry.

Why does this matter? One could reasonably argue that anything that combats bullying is a good thing and that even if Facebook’s new algorithms are imperfect, they should be welcomed with open arms if they can eliminate even a sliver of the horrific toxicity of today’s online world.

The problem is that without knowing even the most basic details or accuracy statistics about Facebook’s new tools, we have no idea whether they are limited to just one or two languages or whether their accuracy rate is as low as 1%. Without any of these details, all we have is the company’s promise that we should once again trust it and that it is using today’s vaunted buzzword, “machine learning,” to fix society’s ills.
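And even a single headline number, were one released, would tell us little on its own. Consider a minimal, hypothetical sketch (the prevalence and counts below are illustrative assumptions, not figures from Facebook): when bullying posts are rare, a system that flags nothing at all can still report sky-high “accuracy,” which is why precision and recall on the posts actually flagged matter far more than any one summary statistic.

```python
# Hypothetical illustration (not Facebook/Instagram code): why a single
# "accuracy" figure says little about a bullying classifier when the
# problem class is rare. Assumes bullying appears in 500 of 100,000 posts.

def report(name, tp, fp, fn, tn):
    """Print accuracy, precision and recall for a confusion matrix."""
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    print(f"{name}: accuracy={accuracy:.1%}  precision={precision:.1%}  recall={recall:.1%}")

# A "classifier" that never flags anything still scores 99.5% accuracy
# on this imbalanced sample, while catching zero bullying posts.
report("flags nothing", tp=0, fp=0, fn=500, tn=99_500)

# A classifier that catches most bullying but also flags many benign posts
# reports a *lower* accuracy, yet is the only one doing anything useful.
report("flags aggressively", tp=450, fp=4_000, fn=50, tn=95_500)
```

The do-nothing system posts the better accuracy number; only the fuller breakdown reveals that it is worthless, which is precisely the kind of detail the company declines to publish.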

Even if the accuracy rate is just 1%, does that really matter? It matters because by announcing its new initiative, Facebook has in essence bought itself breathing space from the policymakers, civil society groups and consumers who have increasingly been pressuring it to do something. By doing “something” it can now claim to be combating the problem and that government regulation, external advocacy or parental intervention are not needed. By refusing to allow external review of its algorithms or to release even the most basic accuracy numbers, the company can claim to be fighting the problem without actually doing anything that meaningfully turns the tide.

In short, by deploying black box solutions that they refuse to allow to be inspected and for which they refuse to publish even the most basic accuracy statistics, social media companies are able to claim to be solving every problem in existence, even if those tools are mere placebos that do nothing or, worse, actually worsen the problem or create new ones.

Moreover, the more silent filtering social media companies perform on their platforms, the more they risk hidden harms, from hiring algorithms that unintentionally penalize women to governments exploiting the new tools to quietly demand censorship of their critics and of legitimate discourse.

Bullying is far too important a societal issue to address in secrecy and shadows. We need real solutions that are developed in the open and work globally, with external attestations of their accuracy and public communication about their tradeoffs.

Putting this all together, from their early roots as absolute defenders of free speech, social media platforms today have wholeheartedly embraced black box censorship and filtering, without any accountability or any visibility into how what we see and say is being profoundly reshaped every day.
