Science fiction author Charlie Stross offers three arguments against the Singularity, which are worth exploring. I'd like to comment on them, as well as on some of the backlash against Stross's arguments. The three arguments revolve around human-level artificial intelligence, the ability to upload the mind into a computer, and the ability to live in a simulation. For this post, I'll focus on human-level intelligence; over the next two days, I'll cover the other two.

Before I begin, though, I'm going to repeat Stross's caveat:

I'm going to take it as read that you've read Vernor Vinge's essay on the coming technological singularity (1993), are familiar with Hans Moravec's concept of mind uploading, and know about Nick Bostrom's Simulation argument. If not, stop right now and read them before you continue with this piece. Otherwise you're missing out on the fertilizer in which the whole field of singularitarian SF, not to mention posthuman thought, is rooted. It's probably a good idea to also be familiar with Extropianism and to have read the posthumanism FAQ, because if you haven't you'll have missed out on the salient social point that posthumanism has a posse.

The first point that Stross makes is simple: one of the necessary steps for the Singularity to happen is the creation of human-level artificial intelligence, which he sees as unlikely.

First: super-intelligent AI is unlikely because, if you pursue Vernor's program, you get there incrementally by way of human-equivalent AI, and human-equivalent AI is unlikely. The reason it's unlikely is that human intelligence is an emergent phenomenon of human physiology, and it only survived the filtering effect of evolution by enhancing human survival fitness in some way. Enhancements to primate evolutionary fitness are not much use to a machine, or to people who want to extract useful payback (in the shape of work) from a machine they spent lots of time and effort developing. We may want machines that can recognize and respond to our motivations and needs, but we're likely to leave out the annoying bits, like needing to sleep for roughly 30% of the time, being lazy or emotionally unstable, and having motivations of its own.

I think that this is a salient point. I also think that human-level AI is unlikely for a number of reasons - more than a few of them related to the differences between biological and machine intelligence. For one thing, we're approaching the end of Moore's Law within about a decade and a half, and generalized quantum computing isn't likely to arrive anytime soon. D-Wave's adiabatic quantum computer, for example, isn't a general-purpose computer - it's focused on optimization problems. But even setting that aside, the differences between human, animal, and machine intelligence are profound.

If Stross’ objections turn out to be a problem in AI development, the “workaround” is to create generally intelligent AI that doesn’t depend on primate embodiment or adaptations.

Couldn’t the above argument also be used to argue that Deep Blue could never play human-level chess, or that Watson could never do human-level Jeopardy?

But Anissimov's first point here is just magical thinking. At present, a lot of the ways that human beings think are simply unknown. To argue that we can simply "work around" the issue misses the underlying point: we can't yet quantify the difference between human intelligence and machine intelligence. Indeed, it's become pretty clear that even human thinking and animal thinking are quite different. For example, it's clear that apes, octopuses, dolphins, and even parrots are, to varying degrees, quite intelligent and capable of using logical reasoning to solve problems. But their intelligence is sharply different from that of humans. And I don't mean on a different level -- I mean actually different. On this point, I'd highly recommend reading Temple Grandin, who's done some brilliant work on how animals and neurotypical humans differ starkly in their perceptions of the same environment.

As to Anissimov's second point, it's definitely worth noting that computers don't play "human-level" chess. Although computers are competitive with grandmasters, they aren't truly intelligent in a general sense - they are, basically, chess-solving machines. And while they're superior at tactics, they are woefully deficient at strategy, which is why grandmasters can still beat or draw against computers. Indeed, as I noted in April, Garry Kasparov's development of Advanced Chess (where humans have access to computers) demonstrates that human and machine intelligence, as applied to chess, are wonderful complements to each other. Humans lose to computers because humans, when they're distracted or tired, make short-term tactical blunders. But computers lose to humans because they are simply no match for humans at creating long-term chess strategy. What Advanced Chess does, in essence, is allow humans to double-check their moves on a computer to make sure they don't make a short-term mistake. Putting the two together creates absolute dominance for the human-computer team. As Kasparov wrote:

The teams of human plus machine dominated even the strongest computers. The chess machine Hydra, which is a chess-specific supercomputer like Deep Blue, was no match for a strong human player using a relatively weak laptop. Human strategic guidance combined with the tactical acuity of a computer was overwhelming.

As for Watson's ability to play Jeopardy!, it's important to note that while Watson did win the tournament he played in, his primary advantage was being able to beat humans to the buzzer (electric relays are faster than chemical ones). Moreover, as many who watched the tournament (such as myself) noted, Watson got worse the more abstract the questions got. For pure questions of fact, Watson was better - but so what? We already know that computers have faster memory retrieval than humans. When it came to plays on words or ambiguity, though, Watson was terrible. And it's also worth noting that Watson got trounced by Representative Rush Holt, Jr. in an exhibition game - one with far more abstract categories than the rounds shown on TV where Watson won.

What computers are good at is brute-force calculation and memory retrieval. They're not nearly as good at pattern recognition or at parsing meaning and ambiguity, nor are they good at learning. To continue with the subject of gaming, it's worth noting that when it comes to games that are tougher to solve mathematically, computers aren't as good as humans. And when it comes to strategy games, such as Starcraft 2 or the Civilization games, long-time gamers know that the computer AI doesn't beat humans by being smarter - it beats humans by cheating: at higher difficulty levels, the program lets the AI do things faster than it allows the human players to. In essence, it handicaps the humans by forcing them to operate under less advantageous rules.
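To make the brute-force side of that contrast concrete, here's a minimal sketch (my own illustration, not any real engine's code) of minimax search - the exhaustive game-tree evaluation that chess programs are built on - applied to tic-tac-toe, which is small enough to search completely:

```python
def winner(board):
    """Return 'X' or 'O' if someone has three in a row on the 9-char board, else None."""
    lines = [(0,1,2), (3,4,5), (6,7,8),   # rows
             (0,3,6), (1,4,7), (2,5,8),   # columns
             (0,4,8), (2,4,6)]            # diagonals
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Exhaustively search every continuation.
    Returns (score, move): +1 means X forces a win, -1 means O does, 0 is a draw."""
    w = winner(board)
    if w:
        return (1 if w == 'X' else -1), None
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None  # board full: draw
    best = None
    for m in moves:
        child = board[:m] + player + board[m+1:]
        score, _ = minimax(child, 'O' if player == 'X' else 'X')
        # X maximizes the score, O minimizes it
        if best is None or (player == 'X' and score > best[0]) \
                        or (player == 'O' and score < best[0]):
            best = (score, m)
    return best

# X to move on 'XX O O   '; taking square 2 completes the top row.
score, move = minimax('XX O O   ', 'X')
print(score, move)  # -> 1 2 (X has a forced win by playing square 2)
```

The point of the sketch is what it lacks: there's no judgment or pattern recognition anywhere, just an exhaustive walk of every possible line of play. Chess engines add pruning and evaluation heuristics because chess's tree is too big to exhaust, but the underlying approach is the same calculation, scaled up.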

Now, I don't doubt that computers are going to get better and smarter in the coming decades. But there are more than a few limitations on human-level AI, not the least of which are the physical limits that come with the end of Moore's Law and the simple fact that, in the realm of science, we're only just beginning to understand what intelligence, consciousness, and sentience even are. That's going to be a fundamental limitation on artificial intelligence for a long time to come - personally, I think it will be the case for centuries.

To use my tried-and-true analogizing to Star Trek, I won't be surprised if we're able to develop a computer as smart as, say, the Enterprise computer within a century or so. But the Enterprise computer is a long, long way from Data.