When launching Facebook, founder Mark Zuckerberg's vision of connecting the world was ambiguous but inoffensive, another iteration of Silicon Valley's fluffy unicorns and rainbow sloganeering.
On the backend, Facebook was harvesting personal data on an almost unimaginable scale to power one of the most pervasive and profitable advertising networks. The privacy questions still loom large. But there are darker angles.

Asymmetric Propaganda Advantage

Facebook's systems, like those of other social media platforms such as Twitter, are tuned to amplify the divisive and the controversial, regardless of substance or veracity. As we've seen, Russian spies understood this tuning and exploited it for maximum reach, injecting inflammatory content into public discourse during the 2016 U.S. presidential campaign (see More Indictments in Russian Election Interference Probe).

The Russian campaigns on Facebook and Twitter were tactically brilliant, repackaging divisive issues in more inflammatory ways with the goal of suppressing voter turnout or spreading disinformation. They amplified existing anxiety, doubt and fear. Essentially, they flung our own dung back at us, and it stuck.

Unfortunately, Facebook's platform gave those actors an asymmetric advantage: The cost of placing ads that appear as native content is low, and users have a hard time distinguishing organic content from paid content.

A large-scale study by the University of Wisconsin-Madison of divisive ads on Facebook found that "users then are prone to share the messages that look like a regular post and thus amplify the disinformation campaign on Facebook."

Inflammatory Content Wins

Increasingly, social media is also being cited as an influence on violent acts. A new study from the University of Warwick suggests there are correlations between hate crimes directed against refugees in Germany and anti-refugee posts on Facebook.

The researchers found that hate crimes in German communities appeared to be more prevalent after an uptick in anti-refugee sentiment on Facebook. When internet service went down in certain communities, the hate crimes tended to subside.

The study cautions that social media doesn't "cause crimes against refugees out of thin air." Instead, it notes, "our argument is that social media can act as a propagating mechanism for the flare-up of hateful sentiments."

Facebook, Twitter and Google have committed to eliminating intentional manipulation of their platforms and to removing bogus content. But those efforts will likely only stop the worst offenders.

The larger problem is that marginal, offensive views on hot-button topics get an outsized reward from algorithms. Those systems circulate popular, peer-voted posts in order to keep users on the platform longer, viewing more ads.

But the approach gives a megaphone to views that, on a societal scale, belong to a shouty minority. The Germany study found that some communities showing the strongest outpouring of support for resettled refugees also had online pockets of fierce anti-refugee content.

The Germany study suggests that what a user saw on Facebook in those communities did not reflect the majority view, perhaps tilting those who hadn't fully formed their own.

Moral Responsibility

How can social media outlets better tune their algorithms? It's a challenging technical problem, but it would also require a willingness to forgo ad revenue that rides on the back of intentionally manipulative or offensive content.

There are also battles to be waged against crafty but otherwise legitimate users who post edgy content that constantly skirts the boundaries of terms of service. As an example, Twitter struggled internally with how to handle right-wing commentator Alex Jones.

But the decisions over Jones and lesser firebrands shouldn't be difficult. Neither Twitter, Facebook nor any other company would allow a speech in its corporate headquarters that, for example, employs racist dog whistles or subtly encourages aggression against refugees.

And online, their policies should be no different.

Such censorship would raise ire, of course. Just a handful of social media outlets have become the main channels for distributing information. Drawing up guidelines for acceptable content isn't difficult, but applying them evenly is.

There's no guaranteed right to free speech on a platform run by a private company. Online communities that existed long before Facebook or Twitter applied their own somewhat arbitrary rules as to what was acceptable. If you didn't like the rules, tough luck.

I deleted my Facebook account a few months ago. It had become an endless digital river of rubbish whose signal-to-noise ratio wasn't worth the time. Because Facebook is what you make it, I was largely insulated from the bogus news, Russian-sponsored posts and other digital detritus. But I found the post-election revelations deeply disturbing and decided I wanted absolutely no part of Facebook anymore.

Facebook, Twitter and others need to realize that they're not just transmitters of content. They're powerful distribution outlets whose backend code massively shapes opinions. And with that power comes a commensurate level of moral responsibility, even if they don't want it.

About the Author

Kirk is a veteran journalist who has reported from more than a dozen countries. Based in Sydney, he is Managing Editor for Security and Technology for Information Security Media Group. Prior to ISMG, he worked from London and Sydney covering computer security and privacy for International Data Group. Further back, he covered military affairs from Seoul, South Korea, and general assignment news for his hometown paper in Illinois.