Wednesday, June 27, 2007

Below is a quote from Ray Kurzweil's book The Singularity is Near. To put it in context: The singularity is a time when humanity as we know it will suddenly change drastically, due to advances in technology. For example, our brains will be enhanced by nonbiological computers, and we'll spend half our time in fully immersive virtual reality. Some of the major advances that will lead to this change are what Kurzweil refers to as "GNR", which stands not for the name of a band with a perpetually delayed album, but for "genetics, nanotechnology, and robotics." Here is the quote:

"The most powerful impending revolution is "R": human-level robots with their intelligence derived from our own but redesigned to far exceed human capabilities. R represents the most significant transformation, because intelligence is the most powerful "force" in the universe. Intelligence, if sufficiently advanced, is, well, smart enough to anticipate and overcome obstacles that stand in its path."

Is it just me, or is that terrifying? This isn't science fiction; Kurzweil actually believes this will happen in the not-too-distant future, and I'm inclined to agree with him. Yet it sounds like science fiction, and not happy utopian future science fiction, but The Matrix / Mad Max / Blade Runner / oops we destroyed the earth science fiction.

Sure, it could go either way. Maybe the obstacles standing in the path of these superhuman, superintelligent, and presumably supersized robots will be obstacles that overlap with humanity's: global warming, crime, obesity, premature baldness. But what if their obstacles are us? We with our dull neuron-based brains and squishy bodies?

I'm sure Kurzweil has speculation on how we'll prevent this from happening (I'm only halfway through the book). I just hope he doesn't underestimate the human race's ability to make extremely stupid decisions, or overlook the fact that when it comes to world-altering technology, it only takes a small group of sketchy people to get their hands on it to do great harm. Let's hope we can overcome that stuff, though, because virtual reality would be kickass, and I do like my squishy body.

8 comments:

The idea of a future where we are dominated by robotic overlords is nothing more than science fiction. Robots of the magnitude of which Kurzweil speaks would be highly improbable, given that computers and electronics can only react to situations. No matter how large storage capacities become, no matter how fast processing power gets, and no matter how many man-hours are put into the algorithms of these machines, there is no chance they will be able to think ahead of us. They may only have responses to various situations. How fast and precise these reactions may be, however, is another story completely.

The electron is the basis of almost all our current technology. Without a severe change in the way we look at future advancements, an impending apocalypse such as this will never become a reality.

The nanobiological computers in our heads sound rad though. Maybe that, in the head of an evil scientist like Dr. Wily, could be the end...

I gotta disagree with you here. Humans are computers made of carbon. In your terms, we "just react" in the same way that computers do, except in more complicated and often messier ways. We're not far off from having computers powerful enough to simulate every function of the human brain (supercomputers are already close to being powerful enough to simulate every neuron). It may be made of silicon rather than carbon, but if it can perform every function identically to that of a human, then what differentiates it from a human?

And the silicon thing is an advantage. Our brains are a mess of slow neurons that fire in a quasi-random fashion that usually works well enough for us to survive. Even today's computers are much faster (per component) than our brains, and can flawlessly retain, retrieve, and transfer the contents of memory. So I don't think a robot of superhuman intelligence is necessarily science fiction.

I dunno about the timeline though. Kurzweil thinks this will happen in our lifetimes. I'm not so sure. We've made good advances in reverse-engineering our brains enough to recreate them with technology, but it's a very long way from being complete. Even considering exponential growth in technology, there's a shitload left to understand, and we may hit limits in the physical nature of computers and/or the brain's ability to understand itself, which could slow that growth. It's exciting to wait and see, though.

I guess it's just my opinion that no amount of code can replace what we consider cognition. Emotion is the major driving force behind every action, and that will never be duplicated - even with a 'bit for bit' copy.

You have to consider that there are major senses that bring about different choices. Granted, if a robot were to detect the smell of burning wood, it perhaps would react with the "fire!" routine... but what if it smelled perfume, or a toasted bagel? It would react in the way that the programmer intended. It may 'remember' certain instances from the past; but it would be impossible for it to alter its decisions based on previous images.

That argument aside, we are always going to be limited to what programmers are capable of. Computers are no smarter than we are; they just perform faster.

I guess it's just my opinion that no amount of code can replace what we consider cognition.

Why not? Unless there is something special about our biological brains (e.g. a soul, or some non-religious ghost in the machine), what's the difference between a brain carrying out its functions and a computer carrying out all the same functions?

Emotion is the major driving force behind every action, and that will never be duplicated - even with a 'bit for bit' copy.

I think emotion could be replicated in addition to other functions. Although subjectively it feels like something special and uniquely human, it's my belief that emotion is just another function of the brain and body. If we smell burning wood, the emotions from past experiences (or hard-wired instincts) kick in, and we are motivated to avoid the fire. Maybe hormones play a role, and computers don't have hormones, but all they really do is affect the computations in our brains that give rise to the conscious experience. All these things can be built into a sufficiently advanced computer.

Perhaps the serial nature of current computers is a limiting factor, sure. But what if we built a computer out of billions of parallel mini-machines that perform exactly the same functions as neurons?

we are always going to be limited to what programmers are capable of. Computers are no smarter than we are; they just perform faster.

Well, humans were "programmed" purely through natural selection. Genes are relatively simple, coding for chunks of protein in a certain arrangement... but not programming every specific function that the brain performs.

I think that if we can get these basic patterns down, the rest will follow naturally. For example, we can set up a network that learns about language. Then it is exposed to language (any language), and eventually learns it. We don't need to specifically program it to "learn Chinese", but it performs this intelligent function naturally due to the way it was constructed. This is stuff we can already do with current computers.
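To make the "learns from exposure" idea concrete, here's a toy sketch (my own illustration, not anything from Kurzweil): a character-bigram model. It's far simpler than a real neural network, but it makes the same point — nothing about any particular language is programmed in; whatever text it's exposed to, English or Chinese, shapes what it produces.

```python
from collections import defaultdict
import random

def train(text):
    """Count character-to-character transitions in raw text.
    No language-specific rules are coded in; the 'knowledge'
    comes entirely from exposure to the training text."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def generate(counts, start, length, seed=0):
    """Produce text in the style of whatever was seen during training."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = counts.get(out[-1])
        if not followers:
            break  # dead end: no observed transition from this character
        chars, weights = zip(*followers.items())
        out.append(rng.choices(chars, weights=weights)[0])
    return "".join(out)

model = train("the cat sat on the mat. the cat ate.")
print(generate(model, "t", 20))
```

Swap in a Chinese corpus and the same untouched code produces Chinese-looking output — the "program" never mentions which language it's learning.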

And if we replicate human intelligence in computers, except with faster and more efficient materials, they'll be smarter than us. Then they can start designing themselves to be even better.

Scary, but in my opinion these are things we'll have to deal with in the next century or so.

The amount of information a computer would need to process to exceed the human brain is astronomical. Just think about the 33 frames you are currently processing from your optical input alone, and the resolution of those frames... Compare that to anything a machine can do, and it becomes obvious that it's going to take a lot of Moore's law to drive us into extinction.

See you in 20,000 years... Mr. Terminator.

I find that catastrophists abound in this age, and I don't really buy into any of it.

If we're right about the amount of information the human brain processes, supercomputers TODAY are almost powerful enough.

It's not like we're processing every detail of each of the 30 frames (or whatever) per second our eyes take in. It's more like "there are edges here", "there's a splotch of colour here", etc., and this information is stitched together in our brain. Still a mind-boggling amount of info, but not so much that we'll never understand it.

I doubt it'll be the end of the world or anything, but I do think the possibility of superintelligent machines will arise soon.

And yeah, Transformers looks awesome 'n all, but some of the changes are pretty dumb. Poor mangled Megatron.

About Phronk

My name is Mike. Some people call me Phronk. I'm a person, and I live in London, Ontario, Canada. I write a lot, hence the blog, but also do a lot of other stuff, including: eating, reading, watching stuff on screens, sleeping, using web sites, and walking. I have a PhD in psychology, which is why I'm so smart and you have to call me "doctor." I research and analyze technology for a living. Now you know everything about me.