A couple of days ago I had the pleasure of listening to a talk by Ray Kurzweil at the Learning Without Frontiers 2012 conference. Kurzweil is a powerful, entertaining speaker. His talk ranged far beyond the narrow limits of his PowerPoint slides, and covered areas as diverse as his acquaintanceship with Noam Chomsky, his founding of the Singularity University, his numerous inventions and much else besides. But it was those PowerPoint slides that I found most interesting. Slide after slide showed evidence of the exponential increase in the power of information technology per unit currency. That increase has never paused over recent decades, and it shows no signs of abating any time soon. Moore’s Law in computing is just a special case of this exponential increase in the power of information technology. (This ‘Law’ is actually an observation first made in 1965 by Intel co-founder Gordon Moore. He noted that the number of components in integrated circuits had doubled every year from the invention of the integrated circuit in 1958 until 1965, and predicted that this trend would continue for years to come. Well, it was a pretty good prediction.)

Our lives will be transformed in the coming decades, in ways we can’t easily predict, because different areas of science have now become information technologies and have, therefore, hopped onto that exponentially accelerating escalator. Think of human genetics, for example. Next year the technology used to analyse the human genome will be twice as powerful as it is right now; a year later it will be four times more powerful; in three years’ time it will be eight times more powerful…
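The doubling arithmetic above can be sketched in a few lines of code. This is a toy illustration of annual doubling, not real sequencing figures; the function name is mine:

```python
# Toy illustration of exponential doubling: the relative power of a
# technology that doubles every year, measured against today (= 1).
def relative_power(years):
    """Power after `years` annual doublings, relative to today."""
    return 2 ** years

for y in range(1, 4):
    print(f"Year {y}: {relative_power(y)}x as powerful as today")
# Year 1: 2x, Year 2: 4x, Year 3: 8x
```

After a decade of such doublings the factor is already 1,024 — which is the point of the escalator metaphor.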

A page from the book of the human genome. One day biotechnologists will be rewriting this book, with consequences that are hard to foresee. (Credit: Rob Elliott)

The human brain isn’t very good at really ‘getting’ exponential increase. We have a gut understanding of linear increase, but not exponential increase. There’s probably a good reason for that: our distant ancestors lived in a world where they had to predict the future on a linear basis. (“If that lion and I keep to our paths then we’ll meet in 20 seconds – I’d better head that way instead.”) Sometimes, even trained scientists don’t ‘get’ that difference between linear and exponential increases. They understand it at an intellectual level, of course, but typically they will vastly underestimate where technology will be in the near future. That was one of the clear points Kurzweil made, and it’s hard to disagree. In a few years’ time, the computing power that resides in an object the size of an iPhone will reside in something the size of a blood cell. We don’t know precisely how that technological miniaturisation will take place, but we can be pretty sure that it will happen. And what will that mean for all of us? We can only guess.
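To make that linear-versus-exponential gap concrete, here is a quick sketch with purely hypothetical numbers: a linear forecast adds a fixed step each year, while an exponential one doubles each year.

```python
# Hypothetical side-by-side of two forecasting habits: a linear
# forecast (add a fixed step each year) versus an exponential one
# (double each year). The numbers are illustrative only.
def linear_forecast(years, step=1):
    """Value after `years` of adding `step` per year, starting from 1."""
    return 1 + step * years

def exponential_forecast(years):
    """Value after `years` of annual doubling, starting from 1."""
    return 2 ** years

for y in (5, 10, 20):
    print(f"After {y:2d} years: linear {linear_forecast(y):>3}, "
          f"exponential {exponential_forecast(y):,}")
# After 20 years the exponential trend sits at 1,048,576
# while the linear one has reached 21.
```

That million-fold gap after two decades is why a forecaster thinking linearly will, as Kurzweil argued, vastly underestimate the near future.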

This idea of ever-accelerating technological advancement led Kurzweil and others to introduce and popularise the concept of a technological Singularity: a point in the not too distant future when advances in computing occur so rapidly, and computation becomes so powerful, that unaugmented human brains will be unable to comprehend the nature of these technologically transcendent ‘beings’.

Perhaps such a Singularity will happen. Perhaps not. But suppose it does happen. I was chatting to a couple of people at the conference who argued that this was the explanation for the Fermi paradox: we don’t see alien beings because they’ve merged with their technology, hit the Singularity, and become transcendent beings. I covered this argument in Where is Everybody? and, personally, I don’t see how it addresses the paradox. The question “where is everybody?” applies just as well to transcendent machine intelligence as it does to biological intelligence. There’s no sign of either.