Footnote: A.I. in the Age of Post-truth

Donald Trump, elected via the electoral college in November 2016, will probably always be remembered as the president of "fake news" and "post-truth". His campaign, the media that supported him (e.g. the mainstream television channel Fox News), and incognito Russian hackers artfully manipulated facts to create an "alternative reality" (to quote one of his associates). What was unique about his election was not that fake news had such a great influence (they did) but that the platform most responsible for spreading them, Facebook, denied their effect. In fact, very few people were using the neologism "fake news" until the day (10 November 2016) when Facebook's founder Mark Zuckerberg made fun (at the Techonomy conference) of the notion that fake news spread on Facebook could be so influential. They were, as all "fake news" have been before.

Political propaganda has been around since the times of the Athenian democracy. The Soviet Union mastered the art of propaganda, creating "fake news" about both its Western enemies and its own conditions (ironically, its main organ of "fake news" was called Pravda, which means "the truth"). Political propaganda was largely funded and controlled by governments until the advent of mass media, which enabled many more organizations to spread fake news. For example, political candidates in democratic countries have always relied on fake news to smear their opponents, and mass media turned smear campaigns into somewhat scientific endeavours.

In the USA, the power of mass media to spread "fake news" was demonstrated by two events that predate the Internet by several decades. In October 1938 actor and filmmaker Orson Welles used the radio to stage a fake alien invasion ("The War of the Worlds") that spread panic around the country. In February 1950, speaking to a women's club in West Virginia, senator Joseph McCarthy claimed to be in possession of a list of communist spies. Over the next two years newspaper articles and radio programs convinced the public that the threat of communist infiltration was real, and by the 1952 elections the "Red Scare" had become the main topic of debate, with each candidate trying to outdo the other in anti-communist fervor. The New York Times of 27 August 1952 ran three front-page stories about the suspected communist infiltration.
Mark Lane's book "Rush to Judgment" (1966) started the conspiracy theories about the assassination of president John Kennedy. Erich von Daniken's book "Chariots of the Gods" (1968) launched an entire genre of nonfiction devoted to documenting how aliens built the ancient monuments of Egypt, England and Easter Island. Peter Hyams' film "Capricorn One" (1978) inadvertently started the conspiracy theory that the USA never went to the Moon. Independent television channels, and later websites, helped spread the conspiracy theories about the "seven sisters" of the oil business and the "Illuminati" who supposedly control the whole world. Michael Moore's documentary "Fahrenheit 9/11" (2004) was a typical specimen of fake news about the Middle Eastern wars of the USA.

In 2015 Paul Horner, a specialist in viral news hoaxes, whom the Washington Post called the "impresario of a Facebook fake-news empire", got help from a powerful new ally: the algorithms employed by the most popular search engine and the most popular social-networking platform. Because people love hatred, his stories became very popular on both platforms, and both platforms helped to spread them. (To be fair, Horner also got help from the mainstream television channel Fox News, which lent credibility to some of his most ridiculous concoctions).

A study by Soroush Vosoughi, Deb Roy and Sinan Aral at MIT, "The Spread of True and False News Online" (published in 2018), analyzed ten years of tweets on Twitter and showed that people are much more likely to be intrigued by (and to re-broadcast) fake news than real stories, with the result that lies spread much faster than the truth. In the human world, falsehood beats truth hands down.

In 2017 the strange case of Lani Sarem's book "Handbook for Mortals" rocked the publishing world. The New York Times declared it the number-one bestseller in the country, but nobody had heard of it before, and there seemed to be no chatter about it on social media. Eventually the suspicion arose that the author (or someone on her behalf) had simply purchased lots of copies of the book from the most influential bookstores, thereby twisting the process to her advantage. (The New York Times promptly dumped it from the bestseller list).

Horner and Sarem had taken advantage of the stupidity of algorithms. In both cases it was not difficult to fool algorithms that have no common sense and no idea of what the consequences of their calculations could be.

It would not be difficult to revise the New York Times algorithm to check whether a book is being discussed on social media and, if the discussion matches the sales, accept the numbers; but, then again, it would be equally easy for authors and their agents to create two, three, ten, one hundred bots that impersonate people on social media and start fake discussions about the book. Who wins?
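A verification step of the kind just described could be sketched as follows. This is only an illustration: the threshold, the function name, and the idea of counting "mentions per thousand sales" are all invented for the example, not an actual New York Times method.

```python
# Hypothetical sanity check for a bestseller list: accept a book's
# reported sales only if social-media chatter is roughly proportional
# to them. All numbers and names here are invented for illustration.

def plausible_bestseller(reported_sales: int,
                         social_mentions: int,
                         min_mentions_per_1000_sales: float = 5.0) -> bool:
    """Flag books whose sales are high but whose discussion is absent."""
    if reported_sales <= 0:
        return False
    mentions_per_1000 = social_mentions / (reported_sales / 1000)
    return mentions_per_1000 >= min_mentions_per_1000_sales

# A genuine bestseller: 50,000 sales and 2,000 mentions.
print(plausible_bestseller(50_000, 2_000))   # True
# A Sarem-style anomaly: 50,000 sales and only 30 mentions.
print(plausible_bestseller(50_000, 30))      # False
```

As the text observes, this check is itself trivial to defeat: a hundred bots posting fake discussions would inflate `social_mentions` past any threshold.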

It is easy for a human being to fool algorithms that have no sense of the world of humans. It is difficult for engineers to make such algorithms impossible to manipulate, precisely because the algorithms have no sense of the world of humans.

In fact, algorithms can even generate "fake news" without any human intervention, just by accident. Dumb algorithms are prone to dumb conclusions. As an example of indirect "fake news": today (28 August 2017) i typed "fake news" into the most popular search engine and the top results were two pictures of Mark Zuckerberg, founder of Facebook, instead of pictures of the main producers of fake news. The caption says "Facebook now blocks ads from pages that spread fake news", but the association created by this page is that "fake news" means "Mark Zuckerberg", not Donald Trump or Fox News.

In fact, before being a producer of fake news, Trump is an avid consumer of fake news: in many cases he may be the victim before he becomes the perpetrator.

How did we find out that the Sarem affair was a scam? People. People figured out that it is unusual for a bestseller not to be the subject of intense social-media discussion. People figured out that the New York Times uses a select number of bookstores to calculate the bestselling books. People figured out that an author could conceivably buy 100 copies of her own book from each of these selected bookstores. People figured out that this is a distortion of the meaning of "best seller": if i buy 100,000 copies of my own book, my book is NOT a bestseller.

If the benefits that social-media platforms obtain from "fake news" are big enough, it may be humanly impossible to disincentivize their (increasingly "intelligent") algorithms from helping the cheaters. And the number of cheaters (both humans and increasingly "intelligent" bots) will be proportional to the benefits of producing fake news.

Reality has always been a game-theoretic problem: there is truth and there are adversaries of truth. But discovering truth is more than just a game between the fact-checkers and the fakers. A "bot" of truth would simply look for consensus, i.e. track the grounds on which a statement was formulated, but that may simply worsen the problem of "echo chambers".

Besides the passion for spectacles of hatred, humans also have a passion for creating "echo chambers", i.e. for listening only to the news that they want to hear. That's why Aristotle and Galen ruled unchallenged for centuries: who dared to argue with their wisdom, certified by all the wise people? Luckily, humans have another unique "evil" skill: disagreeing with each other all the time. Sooner or later (when the risk of being burned at the stake has decreased enough) some expert rises to challenge the master.

It is easy to write a program that checks whether a statement is based on previously proven statements, i.e. that is consistent with the consensus. But how easy is it to build a program that challenges the consensus? If everybody believes that the Earth is flat, how easy is it to build a program capable of discovering that everybody is wrong? i still have more faith in evil humans when it comes to discovering the truth. There is a real chance that "intelligent" algorithms will simply plunge us deeper and deeper into post-truth chaos.
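The first kind of program is indeed trivial. A minimal sketch (the "accepted" statements and the premise relations are invented for illustration, echoing the Aristotle example above):

```python
# Minimal consensus checker: a statement passes if every statement it
# is based on is already in the accepted body of knowledge.
# Everything below is invented for illustration.

accepted = {"heavy bodies fall faster",        # Aristotle's "consensus"
            "the heavens are unchanging"}

based_on = {
    "a cannonball falls faster than a feather":
        {"heavy bodies fall faster"},
    "all bodies fall at the same rate in a vacuum":
        set(),  # rests on no accepted premise: it challenges the consensus
}

def consistent_with_consensus(statement: str) -> bool:
    premises = based_on.get(statement, set())
    # A statement whose premises are not in the accepted set cannot be
    # validated: the checker can confirm the consensus, never overturn it.
    return len(premises) > 0 and premises.issubset(accepted)

print(consistent_with_consensus("a cannonball falls faster than a feather"))
# True
print(consistent_with_consensus("all bodies fall at the same rate in a vacuum"))
# False
```

Note what the sketch cannot do: Galileo's statement is rejected not because it is false but because it has no pedigree in the accepted set, which is exactly the echo-chamber failure the text describes.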

"The size of the lie is a definite factor in causing it to be believed" (Adolf Hitler in "Mein Kampf")

(Disclaimer: i have no political affiliation and i was certainly not a supporter of Trump's opponent).