If Thoughts Were Data, Machines Could Write

British author and journalist Steven Poole had a go recently at the idea that machines will sponsor information terrorism. Or, he speculates, “will they just put humble midlist novelists out of business?”

He examined a supposedly safer version of the Elon Musk-linked news writing software that was allegedly too dangerous to release in its full form. Or was that claim, as many suspect, just more hype? Poole writes:

GPT2 is just using methods of statistical analysis, trained on huge amounts of human-written text – 40GB of web pages, in this case, that received recommendations from Reddit readers – to predict what ought to come next. This probabilistic approach is how Google Translate works, and also the method behind Gmail’s automatic replies (“OK.” “See you then.” “That’s fine!”). It can be eerily good, but it is not as intelligent as, say, a bee.
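The statistical prediction Poole describes can be illustrated in miniature with a toy bigram model: count which word tends to follow which, then extend a prompt one most-likely word at a time. This is a sketch only (the tiny corpus is invented for illustration, and GPT2 itself uses a neural network over subword tokens, not raw bigram counts), but the underlying idea — continuation by statistics, with no understanding anywhere — is the same.

```python
from collections import defaultdict, Counter

# A toy corpus standing in for the 40GB of Reddit-recommended web text
# (this sentence fragment is invented purely for illustration).
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# For each word, count how often each other word follows it.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most frequent next word - no thought involved."""
    followers = bigrams[word]
    return followers.most_common(1)[0][0] if followers else None

def generate(start, length=8):
    """Greedily extend a prompt, the way GPT2 extends an opening sentence."""
    words = [start]
    for _ in range(length):
        nxt = predict_next(words[-1])
        if nxt is None:
            break
        words.append(nxt)
    return " ".join(words)

print(generate("the"))  # fluent-looking, meaning-free continuation
```

Even at this scale the output looks locally plausible while expressing nothing, which is the point: scale the counts up by orders of magnitude and the fluency improves, but the absence of anything to express remains.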

We’ll go Poole one better: It’s probably not as intelligent as an amoeba. The people who created it are intelligent, of course, but amoebas are smarter than we used to think. So are bees.

He makes another point that is often drowned out by deafening AI hype: Most such technical advances appropriate the low-paid labor of countless human beings. For example, thousands of human-written stories online are blended to train an algorithm that spits out copy in response to opening sentences fed to it. Poole ripostes, “When a human writer commits plagiarism, that is a serious matter. But when humans get together and write a computer program that commits plagiarism, that is progress.”

This is also true of AI translation and machine vision. Because all these areas are so new, the right questions about fair distribution of social rewards have yet to be asked.

We should look at the Luddite uprising and be warned: The Luddites were not “anti-technology,” as usually portrayed. They opposed (and were broken by) a system in which new technology was downgrading skilled tradespeople into low-wage day laborers while benefiting the wealthy and powerful. Fortunately, today’s digital revolution has not triggered bloody uprisings, because most social classes have benefited from the new technologies, for example in communications and healthcare.

More generally, Poole reminds us, “Writing is not data, it is a means of expression, and a non-sentient computer program has nothing to express.” This gets us close to the heart of the matter. He fed the opening sentences of his own article on the subject into the tamer version of GPT2 and published the proposed continuations that resulted: an onslaught of null bloviation.

The problem is stark: The “writer” is not experiencing the generation of thought, and the reader certainly isn’t contemplating it. The fact that creativity does not follow computational rules may well be a ceiling for machine writing, and it is not made of glass.

Poole suggests that GPT2 has a future in the routine generation of first drafts of boilerplate reports, as a sort of glorified template. George Orwell (1903–1950), who thought that machines could write trashy novels to distract the subjects of a totalitarian state, might agree (though he had little to fear where his own work was concerned, as Poole shows).

Mind Matters features original news and analysis at the intersection of artificial and natural intelligence. Through articles and podcasts, it explores issues, challenges, and controversies relating to human and artificial intelligence from a perspective that values the unique capabilities of human beings. Mind Matters is published by the Walter Bradley Center for Natural and Artificial Intelligence.