AI Dangers That Are Not Just Fake News

Recently, we looked at a claim by an Elon Musk-linked organization that it had developed an AI news writer so good it could not be released. Its developers feared that it would be a boost to the fake news industry.

Just how seriously to take such a threat is unclear for several reasons. As Robert J. Marks noted, the automated composition of nonsense computer science papers has been around since 2012. Sports and business publications, which lend themselves to automated reporting of results, have been providing credible information that way for years.

Besides, “fake news” is a highly disputable concept. Very often, it just means news that is not favorable to a given party or clickbait that sells advertising. Proposals to do something about it would typically raise First Amendment (freedom of the press) issues in the United States.

The language model can write like a human, but it doesn’t have a clue what it’s saying… The passages of text that the model produces are good enough to masquerade as something human-written. But this ability should not be confused with a genuine understanding of language — the ultimate goal of the subfield of AI known as natural-language processing (NLP). (There’s an analogue in computer vision: an algorithm can synthesize highly realistic images without any true visual comprehension.) In fact, getting machines to that level of understanding is a task that has largely eluded NLP researchers. That goal could take years, even decades, to achieve, surmises Liang, and is likely to involve techniques that don’t yet exist.

The techniques certainly don’t yet exist. No one has put forward any idea how to get a program to “have a clue what it’s saying.” That takes us into the realm of science fiction (Zoe comes to mind).

The reason the OpenAI model “writes like a human” is the sheer number of examples fed to it. Thus, its tale of the evolutionary biologist who finds “Ovid’s unicorn” could indeed sound like it came from a popular science mag. Thousands of such stories out there form a template into which to spill facts. The use of templates is a staple of writing instruction that long predates artificial intelligence. A program can sift through much more existing material much faster than a human, of course, but don’t look there for new information or ideas.
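The principle can be illustrated with a toy sketch. The following is only a bigram Markov chain, nowhere near the scale or sophistication of the OpenAI model, and the miniature training corpus is made up for illustration. But it shows how text that mimics the surface patterns of its examples can be generated with no understanding at all of what the words mean:

```python
import random

# Toy illustration (NOT OpenAI's method, which is vastly more complex):
# a bigram "language model" that learns only which word tends to follow
# which in its training examples, then regurgitates those patterns.
# The corpus below is invented for demonstration purposes.
corpus = (
    "the scientists found a herd of unicorns in a remote valley "
    "the scientists found that the unicorns spoke perfect English "
    "the unicorns lived in a remote valley in the mountains"
).split()

# Count which words follow each word in the training text.
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def generate(start, length=10, seed=0):
    """Emit a word sequence by repeatedly sampling a word that
    followed the previous word somewhere in the training text."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        choices = follows.get(words[-1])
        if not choices:  # dead end: no example of what follows this word
            break
        words.append(rng.choice(choices))
    return " ".join(words)

print(generate("the"))
```

The output reads plausibly enough because every two-word sequence it emits occurred in the examples, yet the program has no idea what a unicorn or a valley is. Scaling up the examples and the statistics makes the mimicry more convincing, not more comprehending.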

While it’s unclear that the automated news spew is a serious threat, there are a number of real AI dangers we should be aware of, especially constant surveillance and data gathering:

It is now possible to track and analyze an individual’s every move online as well as when they are going about their daily business. Cameras are nearly everywhere, and facial recognition algorithms know who you are. In fact, this is the type of information that is going to power China’s social credit system that is expected to give every one of its 1.4 billion citizens a personal score based on how they behave—things such as do they jaywalk, do they smoke in non-smoking areas and how much time they spend playing video games. When Big Brother is watching you and then making decisions based on that intel, it’s not only an invasion of privacy, it can quickly turn to social oppression.
— Bernard Marr, “Is Artificial Intelligence Dangerous? 6 AI Risks Everyone Should Know About” at Forbes

The Machine can know a great deal about us if we heedlessly generate a mass of information that can be gathered, stored, and used for purposes of which we might have no idea. Does your phone know everything now? At present, you can shut the phone out of much of the loop if you take steps to do so.

Because so much surveillance is now possible, the new AI technologies that promised freedom are becoming a threat to it. It’s well known that the Chinese government is using AI to monitor and police the daily lives of citizens. What’s less well-known is that China is exporting the technology for mass surveillance to developing countries as foreign aid. Or that Canada recently demanded intimate banking data from half a million citizens. It’s probably quite easy for governments to convince themselves that they could solve a lot more of everyone’s problems if they just knew what we are all doing all the time.

Quite apart from issues of power and control, data is an asset, one that can make those who market it wealthy and powerful. Your medical data, for example, is valuable, but you don’t own it, and it may be part of the immense and growing medical data market. For that matter, Google is collecting data on schoolkids in the United States, in the guise of helping schools with free technology. Free? It’s still true that nothing in the world is free.

It’s also true that the unstoppable Machine is science fiction. The problem with AI technology is not that it will do our thinking for us but rather that those who control it will try to. That’s happened many times before: Those who control a new communications technology typically have a great advantage over those who don’t—whether that technology is an alphabet, an abacus, a printing press, a telegraph, a telephone, or a software communications program. Just staying informed becomes a road to freedom.

See also: Who’s afraid of AI that can write the news? AI now automates formula news in business and sports. How far can it go?

Mind Matters features original news and analysis at the intersection of artificial and natural intelligence. Through articles and podcasts, it explores issues, challenges, and controversies relating to human and artificial intelligence from a perspective that values the unique capabilities of human beings. Mind Matters is published by the Walter Bradley Center for Natural and Artificial Intelligence.