It Comes Naturally

Real Intelligence Can Never Be Matched by the Artificial

In Salvo 46, we looked at the artificial intelligence doomsdays prophesied by celebrated figures such as physicist Stephen Hawking (1942–2018) and tech magnate Elon Musk ("more dangerous than nukes").1 Media do not, as a rule, examine such claims carefully. There is a market for them, after all; why damage the brand? Debunking thus falls to comparatively obscure sources.

But what a land of opportunity awaits! The AI industry faces major, open, unsolved problems with the dream of replicating human intelligence. Some insights from ID thinkers and sympathizers can help us unpack the breathless claims.

First, what is intelligence? Intelligence enables us to know things. But what does it mean to "know" something? We know things in the sense that our "selves" are aware of them. When we talk about knowledge, we assume a "knower," a self to which the information is apparent. Absent a self as the subject of the experience of knowing, knowledge—in the sense in which we usually use the word—does not exist. As neurosurgeon Michael Egnor says, "Your computer doesn't know a binary string [of code] from a ham sandwich. . . . Your cell phone doesn't know what you said to your girlfriend this morning."2

Could a very sophisticated machine develop such a self? That's a tough question, though not for the reason some would think. Our sense of a self is bound up with our consciousness. But there is no generally accepted theory of consciousness.

A Messed-Up Field

There are theories, yes. For example, Darwinian philosopher Daniel Dennett argues that consciousness is an illusion that perpetuates our species.3 Many naturalists (materialists) don't find his answer viable. But what are their options? Cognitive computing professor Mark Bishop outlines a dilemma in New Scientist: If we can develop a sophisticated machine that experiences consciousness, "then an infinitude of consciousnesses must be everywhere: in the cup of tea I am drinking, in the seat that I am sitting on."4 Bishop presents this as an impossible view, but serious thinkers, not cranks, do attempt to resolve the issue in precisely this way, by claiming that everything, even inanimate objects, is conscious to some extent (panpsychism).5

Apart from untenable theories, the field as a whole is a mess. In June 2018, Tom Bartlett attended an academic conference on consciousness on behalf of The Chronicle of Higher Education and summarized it by asking, "Is this the world's most bizarre scholarly meeting?"6 If that is a key question, we can infer that consciousness studies will shed little light on artificial consciousness. We are left with no way to define what we are even talking about, let alone aim for it.

Outside the Range

Software programmer Brendan Dixon of the Biologic Institute illustrates some of the limitations created by the absence of an actual self: Computers outstrip humans in games like chess and Go, he notes, because these games are "perfect information" environments with inviolable rules. Programmers can take into account everything that could happen. But what about language? "Language is anything but a 'perfect information' environment. Words are vague. Meaning is unclear. (And arguments thus ensue.)"7 That is a strength of human language, not a weakness, because it helps us capture reality on the fly. But it is simply outside the range of what computer algorithms can do.
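Dixon's point about "perfect information" games can be made concrete. In a game like tic-tac-toe (a toy stand-in for chess or Go), the fixed rules let a program enumerate every reachable position, so a simple minimax search plays perfectly with no understanding at all. The sketch below is purely illustrative; the names and code are my own, not drawn from any project cited in this article:

```python
# Minimax on tic-tac-toe: because the game has fixed rules and no hidden
# information, a program can exhaustively search every possible line of play.
# Illustrative sketch only, not code from any cited project.

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    lines = [(0,1,2), (3,4,5), (6,7,8),   # rows
             (0,3,6), (1,4,7), (2,5,8),   # columns
             (0,4,8), (2,4,6)]            # diagonals
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Score a position for 'X': +1 win, -1 loss, 0 draw, searching all moves."""
    w = winner(board)
    if w:
        return 1 if w == 'X' else -1
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0  # board full: draw
    scores = []
    for m in moves:
        board[m] = player
        scores.append(minimax(board, 'O' if player == 'X' else 'X'))
        board[m] = ' '  # undo the trial move
    return max(scores) if player == 'X' else min(scores)

# With perfect play from both sides, tic-tac-toe is a draw:
assert minimax(list(' ' * 9), 'X') == 0
```

No comparable exhaustive search exists for a sentence of ordinary English, which is exactly the contrast Dixon is drawing.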

One recent project is MIT's Moral Machine, which crowdsources human judgments on the ethical decisions a self-driving car might face. About that, Dixon (who has played some rounds) says,

To solve problems in a computer requires that we encode the problem into terms the computer can manage. That means reducing the shades of grey into a selection of discrete choices. MIT's researchers replaced those shades of grey with a binary choice between gruesome outcomes. . . . But, in moral choices, it is the immediate circumstances, the thousand little details we handle holistically with our minds, that do matter.8

We may not know what consciousness is, but we can be pretty sure that it is not a series of 1-or-0 logic gates—and that a series of logic gates is not consciousness.

Searching for the Homunculus

Or is it? One end run around consciousness and its artifacts is to insist that the human brain is itself a highly sophisticated computer, differing from its current artifacts only in complexity. Thus, once we understand how our brains work, we can build computers that do likewise. To that notion, mathematician David Berlinski responds,

It is hardly beyond dispute that the human brain is a computer, except on the level of generality under which the human brain is like a weather system. . . . It is possible to embed the rules of recursive arithmetic in a computer, but how might embedding take place in the brain? If this question has no settled answer, then neither does the question of whether the brain is a computer.9

The standby response, "We just need more research!" is not much help if we don't know what we are trying to research.

Bill Dembski likens the search for a machine self that would enable the machine to think creatively to the early modern search for the "homunculus," the tiny human that was once thought to animate the early embryo: "We have no precedent or idea what such a homunculus would look like. And obviously, it would do no good to see the homunculus itself as a library of AI algorithms controlled by a still more deeply embedded homunculus."10

Positive Views

Probably because ID theorists do not see AI as an emerging alien intelligence, they tend to view it positively, at least in principle. Brendan Dixon suggests,

Let's use Deep Learning machines to assist radiologists in analyzing MRIs and mammograms. Let's use self-learning machines to monitor for fraudulent transactions (to be reviewed, in turn, by humans). Let's use pattern-matching machines to ferret out possible plagiarism (to be addressed, in turn, by a human editor). Let's capture, as best we can, what others have learned and make it available for others to use. But let's stop fooling ourselves: AI is not the emergence of another intelligence.11

In reality, artificial intelligence will continue to be an extension of the intelligence of programmers. And that, precisely, is the peril. Those who have the technology can rule over those who do not via constant management and surveillance at a depth never before possible. Something of the sort is emerging in China.13 Whether it emerges here depends on which social and political philosophies prevail.

As it happens, many of our political, media, and academic elites are drifting toward philosophies that accommodate authoritarian rule. In Salvo 48, we will look at some risky trends and some strategies for preventing abuses.