HOW TECHNOLOGY DISRUPTED TRUTH: PART 2

Recently, the British newspaper The Guardian ran a long story on the impact technology is having on our perception of truth.

It was a thoughtful, informative piece that pointed out how social media are not only disrupting but corrupting journalistic integrity and honesty worldwide. We are seeing that happen in the United States, where unrestricted and uninhibited social media brim with fabrications, smears, deceptions, and outright lies. Regrettably, those stories are posted and forwarded to millions of people as “truth.”

The story was reported and written by Katharine Viner, editor-in-chief of Guardian News & Media. Because it was such a long story, I am running it on my blog in five parts. I have opted to leave British spelling and style intact.

Here is Part 2.

The diminishing status of truth

By Katharine Viner

Twenty-five years after the first website went online, it is clear that we are living through a period of dizzying transition. For 500 years after Gutenberg, the dominant form of information was the printed page: knowledge was primarily delivered in a fixed format, one that encouraged readers to believe in stable and settled truths.

Now, we are caught in a series of confusing battles between opposing forces: between truth and falsehood, fact and rumour, kindness and cruelty; between the few and the many, the connected and the alienated; between the open platform of the web as its architects envisioned it and the gated enclosures of Facebook and other social networks; between an informed public and a misguided mob.

What is common to these struggles – and what makes their resolution an urgent matter – is that they all involve the diminishing status of truth. This does not mean that there aren’t any truths. It simply means, as this year has made very clear, that we cannot agree on what those truths are, and when there is no consensus about the truth and no way to achieve it, chaos soon follows.

Increasingly, what counts as a fact is merely a view that someone feels to be true – and technology has made it very easy for these “facts” to circulate with a speed and reach that was unimaginable in the Gutenberg era (or even a decade ago).

A dubious story about Cameron and a pig appears in a tabloid one morning, and by noon, it has flown around the world on social media and turned up in trusted news sources everywhere. This may seem like a small matter, but its consequences are enormous.

“The Truth,” as Peter Chippindale and Chris Horrie wrote in Stick It Up Your Punter!, their history of the Sun newspaper, is a “bald statement which every newspaper prints at its peril.” There are usually several conflicting truths on any given subject, but in the era of the printing press, words on a page nailed things down, whether they turned out to be true or not. The information felt like the truth, at least until the next day brought another update or a correction, and we all shared a common set of facts.

This settled “truth” was usually handed down from above: an established truth, often fixed in place by an establishment. This arrangement was not without flaws: too much of the press often exhibited a bias towards the status quo and deference to authority, and it was prohibitively difficult for ordinary people to challenge the power of the press.

Now, people distrust much of what is presented as fact – particularly if the facts in question are uncomfortable, or out of sync with their views – and while some of that distrust is misplaced, some of it is not.

In the digital age, it is easier than ever to publish false information, which is quickly shared and taken to be true – as we often see in emergency situations, when news is breaking in real time. To pick one example among many, during the November 2015 Paris terror attacks, rumours quickly spread on social media that the Louvre and Pompidou Centre had been hit and that François Hollande had suffered a stroke. Trusted news organisations are needed to debunk such tall tales.

Sometimes rumours like these spread out of panic, sometimes out of malice, and sometimes out of deliberate manipulation, in which a corporation or regime pays people to convey its message. Whatever the motive, falsehoods and facts now spread the same way, through what academics call an “information cascade.”

As the legal scholar and online harassment expert Danielle Citron describes it, “people forward on what others think, even if the information is false, misleading or incomplete, because they think they have learned something valuable.”

This cycle repeats itself, and before you know it, the cascade has unstoppable momentum. You share a friend’s post on Facebook, perhaps to show kinship or agreement or that you’re “in the know,” and thus you increase the visibility of their post to others.

“Here is the news – but only if Facebook thinks you need to know” – Prof. John Naughton, Open University

Algorithms such as the one that powers Facebook’s news feed are designed to give us more of what they think we want – which means that the version of the world we encounter every day in our personal stream has been invisibly curated to reinforce our pre-existing beliefs.

When Eli Pariser, the co-founder of Upworthy, coined the term “filter bubble” in 2011, he was talking about how the personalised web – and in particular Google’s personalised search function, which ensures that no two people’s Google searches are the same – means that we are less likely to be exposed to information that challenges us or broadens our worldview, and less likely to encounter facts that disprove false information that others have shared.

Pariser’s plea, at the time, was that those running social media platforms should ensure that “their algorithms prioritise countervailing views and news that’s important, not just the stuff that’s most popular or most self-validating.” But in less than five years, thanks to the incredible power of a few social platforms, the filter bubble that Pariser described has become much more extreme.

On the day after the EU referendum, in a Facebook post, the British internet activist and mySociety founder Tom Steinberg provided a vivid illustration of the power of the filter bubble – and the serious civic consequences for a world where information flows primarily through social networks:

“I am actively searching through Facebook for people celebrating the Brexit leave victory, but the filter bubble is SO strong, and extends SO far into things like Facebook’s custom search that I can’t find anyone who is happy *despite the fact that over half the country is clearly jubilant today* and despite the fact that I’m *actively* looking to hear what they are saying.

“This echo-chamber problem is now SO severe and SO chronic that I can only beg any friends I have who actually work for Facebook and other major social media and technology to urgently tell their leaders that to not act on this problem now is tantamount to actively supporting and funding the tearing apart of the fabric of our societies … We’re getting countries where one-half just doesn’t know anything at all about the other.”

But asking technology companies to “do something” about the filter bubble presumes that this is a problem that can be easily fixed – rather than one baked into the very idea of social networks that are designed to give you what you and your friends want to see.