Regarding the Singularity

A recent set of articles in the New York Times and elsewhere, including the Kurzweil book, prompted a friend to ask me for my thoughts on the Singularity Movement. Here is an excerpt of the email I wrote:

Regarding the Singularity Movement, I think economic arguments such as that presented by Robin Hanson in IEEE Spectrum (2008) carry more weight than the gushing futurist predictions from the likes of Ray Kurzweil. In the Spectrum article Hanson cites two previous singularities — the agricultural and industrial revolutions — and suggests that a revolution in machine intelligence is leading to a third that will take shape over the next half-century.

I tend to take most of what futurists say with a grain of salt, because they rely on a belief, an assumption, that introducing disruptive technologies into a society yields predictable results, for good or bad, and it never does. Technologies are human constructions, we humans never make completely rational decisions, and all of this plays out in a fundamentally chaotic, only approximately predictable context. That combination means we simply cannot know what will happen in the future!

Here’s what I know: We humans are wired to build and use tools and, to the extent possible, adapt to the environments we build — or die trying. Google, while amazing, is still a tool; an engineered system that (given enough time) I can explain to you. Ironically enough, the reason Google works so well is that it’s actually based on simpler, but more fundamental principles than the systems which preceded it, closer to how naturally-occurring networks emerge and function. But the way Google has been adopted and applied in the “ecosystem,” while making sense in hindsight, could not have been predicted.
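The post doesn’t name the principle it has in mind, but the usual candidate is PageRank, which ranks a page by the ranks of the pages linking to it — a simple, recursive idea borrowed from how influence propagates in natural networks. A minimal sketch of the power-iteration version (the function name, damping value, and toy graph here are illustrative, not Google’s actual implementation):

```python
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to.

    Returns a dict of page -> rank score, summing to ~1.0.
    A toy sketch of the classic power-iteration formulation.
    """
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with uniform rank
    for _ in range(iterations):
        # every page keeps a small baseline from the "random surfer" jump
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                # dangling page: spread its rank evenly across all pages
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                # a page passes its rank equally to everything it links to
                for target in outlinks:
                    new_rank[target] += damping * rank[page] / len(outlinks)
        rank = new_rank
    return rank

# a tiny hypothetical web: a links to b and c, b links to c, c links back to a
web = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(web)
```

The point stands on its own: the scoring rule is a few lines, yet the ranking it produces is an emergent property of the whole link graph, not of any single page.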

I’m currently reading Jonah Lehrer’s How We Decide, a wonderful exploration of the biochemistry of how we make decisions. Any such discussion naturally must touch on how various chemical imbalances (e.g., dopamine) affect that process, and how well-intentioned efforts by doctors to counteract certain imbalances lead to very unexpected and usually undesired results.

Lehrer’s book makes it profoundly clear that we never know for certain what will happen when we diddle with the decision-making processes in our brain, whether that means extending the lower levels of the nervous system (the sensory level) or the higher-level processes. Researchers do know that we seem to adapt well to lower-level interventions such as neural prosthetics, but each higher-level process involves a synaptic algorithm that we don’t completely understand, mostly because our brain is a distributed system, not a single “algorithm,” and its “result” is emergent.

That ultimately is my point: our brains are distributed systems that exhibit adaptive and unpredictable behaviors, and we can’t begin to understand what will happen when we explore higher-level prosthetics based on “intelligent machines.” Something will happen, but there is no reason to believe it will lead to either a Utopian or Dystopian existence any more than the agricultural or industrial revolutions resulted in one or the other. Indeed, the introduction of those practices to certain natural and economic ecosystems led to both regional successes and catastrophes.

See also On Intelligence, the companion site to Jeff Hawkins’s provocative book of the same name. The book introduces the concept of Hierarchical Temporal Memory (HTM), based on a layered hierarchical model of how the neocortex functions.