Posted by timothy on Wednesday May 06, 2009 @03:36PM
from the seems-to-invite-some-polite-skepticism dept.

destinyland writes "AI researcher Ben Goertzel peeks at the new Ray Kurzweil movie (Transcendent Man), and
gives it 'two nano-enhanced cyberthumbs way, way up!' But in an exchange with Kurzweil after the screening, Goertzel debates the
post-human future, asking whether individuality can survive in a machine-augmented brain.
The documentary covers radical futurism, but also includes alternate viewpoints.
'Would I build these machines, if I knew there was a strong chance they would destroy humanity?' asks evolvable hardware researcher Hugo de Garis. His answer? 'Yeah.'" Note, the movie is about Kurzweil and futurism, not by Kurzweil. Update: 05/06 20:57 GMT by T: Note, Singularity Hub has a review up, too.

... we'll be wrong. My own theory is that strong AI is the ultimate weapon and that it will never ever fall into the hands of the likes of you and me. Whether the machines get out of control is irrelevant; eventually the parties that control them will be slugging it out with weapons powerful enough to make life here hardly worth living. I expect to be dead before then, thankfully. But remember the first sentence of this post.

Funny you should mention Stewart. We saw him perform recently and he had a good talk about how the world will end.
He said that the end won't happen due to war or something like a natural disaster. "The last thing we'll hear is some scientist saying, 'It works!'"

..this story falls in the category of "sh#t that's never gonna happen".

I'm going to have to strongly disagree with you. I've been studying neuroscience for a while and specifically, neural simulations in software. Our knowledge of the brain is quite advanced. We're not on the cusp of sentient AI, but my honest opinion is that we're probably only a bit over a decade from it. Certainly no more than 2 decades from it.

There's been a neural prosthetic [wireheading.com] for at least 6 years already. Granted, it acts more as a DSP than a real hippocampus, but still, it's a major feat and it won't be long until a more faithful reproduction of the hippocampus can be done.

While there are still unknown details about how various neural circuits are connected, this information will be figured out in the next 10 years. Neuroscience research won't be the bottleneck for sentient AI, however. Computer tech will be. The brain contains tens to hundreds of trillions of synapses (synapses are really the "processing elements" of the brain, more so than the neurons, which number only in the tens of billions). It's a massive amount of data. But 10-20 years from now, very feasible.
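To get a feel for the data volume involved, here's a back-of-envelope sketch; the synapse count and the bytes-per-synapse figure are rough assumptions, not established numbers:

```python
# Rough storage estimate for a whole-brain synapse map.
# All figures are order-of-magnitude assumptions, not measurements.
synapses = 1.5e14        # "tens to hundreds of trillions" of synapses
bytes_per_synapse = 8    # assume a weight plus connectivity info per synapse

total_bytes = synapses * bytes_per_synapse
print(f"{total_bytes / 1e12:.0f} TB")  # -> 1200 TB
```

Petabyte-scale storage is exotic today but plausibly commodity within the 10-20 year window described above.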

So, here's how computers get massively smarter than us really fast. 10-20 years AFTER the first sentient AIs are created, we'll have sentient AIs that can operate at tens to hundreds of times faster than real time. Now, imagine you create a group of "research brains" that all work together at hundreds of times real time. So in a year, for example, this group of "research brains" can do the thinking that would require a group of humans to spend at least a few hundred years doing. Add to that the fact that you can tweak the brains to make them better at math or other subjects and that you have complete control over their reward system (doing research could give them a heroin-like reward), and you're going to have super brains.
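The arithmetic behind those "research brains" is simple compounding of speedup and headcount; the specific numbers below are illustrative assumptions, not claims:

```python
# Subjective thinking-time produced by a group of sped-up simulated brains.
# Both parameters are hypothetical, chosen to match the ranges above.
speedup = 200     # each brain runs 200x real time ("hundreds of times")
group_size = 10   # ten brains collaborating

subjective_years_per_year = speedup * group_size
print(subjective_years_per_year)  # -> 2000 person-years of thought per calendar year
```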

Once you accept the fact that sentient AI is inevitable, the next step, of super-intelligent AIs, is just as inevitable.

Pardon me... what the hell is "faster than real time"? Does that mean it comes up with the answers before you ask the question?

Faster than the human brain thinks.

IIRC, the human brain fires off at something like 200 MHz. That may not be 100% accurate; I can't recall where I read that factoid, and a quick Google search doesn't corroborate it -- but ultimately the specific numbers don't matter.

Assuming a brain does go at 200 MHz... Once a simulated human brain goes faster than 200 MHz, by definition you have something that can think faster than a human.

Currently a cheap desktop will run at about 10-20 times faster than that, speaking in pure MHz.
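The comparison being made can be written out explicitly; note the 200 MHz brain "clock" is the grandparent's unverified figure (biological neurons actually fire at most a few hundred times per second), so this is only a sketch of the argument, not a real benchmark:

```python
# Naive clock-ratio comparison between a hypothetical "brain clock"
# and a commodity desktop CPU. Both numbers are assumptions.
brain_clock_mhz = 200       # the grandparent's unverified figure
desktop_clock_mhz = 3000    # a ~3 GHz desktop of the era

print(desktop_clock_mhz / brain_clock_mhz)  # -> 15.0, inside the "10-20x" range
```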

Not to start asking hard questions or anything, but does simulating the brain really imply we can create sentient AI? What if there is more to it than that? Perhaps sentience can only arise as a result of our brains being "jump-started" in some way (cosmic radiation, genetic preprogramming or whatever)? To start the AI you would have to "copy" an existing brain or play with random starting states... Could be unpredictable. Irrational sentience, anyone?

So, here's how computers get massively smarter than us really fast. 10-20 years AFTER the first sentient AIs are created, we'll have sentient AIs that can operate at tens to hundreds of times faster than real time. Now, imagine you create a group of "research brains" that all work together at hundreds of times real time. So in a year, for example, this group of "research brains" can do the thinking that would require a group of humans to spend at least a few hundred years doing.

Ah, but then you'll likely need tens to hundreds of times the input bandwidth to keep the processors cooking, yet it seems information overload at a much smaller scale jams up current biological intelligences. Just like cube-square scaling applies firm limits to what genetic engineering can do to organisms (although cool stuff can be done inside those limits), some similar bandwidth vs. storage vs. processing scaling laws might or might not limit intelligence. Too little bandwidth makes insane hallucinations? Too much bandwidth will make something like ADD? Proportionally too little storage gives an absent-minded professor in the extreme, continually rediscovering what it forgot yesterday. I think there is too much faith that intelligence in general, or AI specifically, must be sane and will always develop out of the basic requirements, because of course AI researchers are sane and their intelligence more or less developed out of their own basic biological abilities (as opposed to the developers becoming crazy couch-potato Fox-News-watching zombies).

Then too, it's useless to create average-brain-level AIs, even if they think really fast, even if there is a large group. All you'll get is MySpace pages, but faster. Telling an average bus full of average people to think real hard, for a real long time, will not earn a Nobel Prize, any more than telling a bus full of women to make a baby in only two weeks will work. Clearly, giving high school dropouts a bunch of meth to make them "faster" doesn't make them much smarter. Clearly, placing a homeless person in a library doesn't make them smart. Without cultural support science doesn't happen, and is the culture of one AI computer more like a university or more like an inner city?

It's not much of an extension to tie the AI vs. super-intelligent AI competition in with contemporary battles over race and intelligence. Some people have a nearly religious belief that intelligence is an on/off switch and that individuals or cultures who are outliers above and below are just lucky or a temporary accident of history. Those people, of course, are fools. But they have to be battled through as part of the research funding process.

I appreciate your insight, but I very strongly doubt it's just a matter of simulating a bunch of neurons. If we did, where's our strong AI bug simulation? You know, a bug that would learn to walk and eat without being programmed to do it? I think the problem is an algorithm problem, and "putting a whole bunch of identical (simulated) neurons together" doesn't seem like it's gonna cut it. I think the question is whether or not this is at all theoretically possible. I think you're being too quick at claiming

Mike Judge's vision of the future in "Idiocracy" seems much more likely.

On the issue of whether computer-enhanced humans are still "human" - what does that even mean? Genetically, "Human" is 98% chimpanzee, 50% dog, 30% daffodil, etc. (I'm sure I have the numbers wrong).

I think we tend to over-rate the concept of "humanity". Every thought or emotion you've ever had is merely your impression of sodium ions moving around in your brain. We process information. Computers do it. Chimpanzees do it. Dogs do it. Even daffodils do it. It is just not that special.

"Individuality" is an illusion. You may process information differently than I do. But you also process information at time x differently than you process information at time x+1. Because the "human" self is a manifestation of the brain, the human "self" changes with each thought. Consciousness is an instantaneous phenomenon and there is no continuity of "self". In effect, we have all "died" an infinite number of times.

Consciousness is an instantaneous phenomenon and there is no continuity of "self".

However, just because something ("Consciousness" in this case) is emergent and cannot be well described by the sum of the parts doesn't mean we shouldn't at least consider what these sorts of human/machine interfaces might do to our perception of self in the future if ever they exist. My prediction: as long as I can still enjoy a fine single malt - and some bacon from time to time I'll consider the future a smashing success.

"Individuality" is an illusion. You may process information differently than I do. But you also process information at time x differently than you process information at time x+1. Because the "human" self is a manifestation of the brain, the human "self" changes with each thought. Consciousness is an instantaneous phenomenon and there is no continuity of "self". In effect, we have all "died" an infinite number of times.

That's a bit overboard, I think. You're basically claiming (and I'm trying not to strawman you, here) that abstract concepts can't be used to identify patterns, but instead can only be used to identify identical things. There's plenty of reason for me to label myself at time=2009 and myself at time=2007 the same person, just as we label anything else that changes but maintains identifiable and distinct patterns.

As a scientist, individual identity seems like a common and accurate label for each person's idiosyncratic tendencies.

That's a bit overboard, I think. You're basically claiming (and I'm trying not to strawman you, here) that abstract concepts can't be used to identify patterns, but instead can only be used to identify identical things. There's plenty of reason for me to label myself at time=2009 and myself at time=2007 the same person, just as we label anything else that changes but maintains identifiable and distinct patterns.

As a scientist, individual identity seems like a common and accurate label for each person's idiosyncratic tendencies.

It gets more complicated when myself2030 and myself2032 are standing side by side. If myself2030 kills Joe Smith and then commits suicide, is myself2032 partially responsible? 100%? 0%? With no legal link between selves, once a copy of yourself can be made for $100, murder-suicide against government officials or political people you disagree with becomes easy to do, and a copy that plans on suiciding anyway is difficult to protect against.

I agree with what you've said up to a point. But consciousnesses don't mingle (at least, mine hasn't...); our consciousness remains locked to our individual brains and perception. If we do any sort of human brain networking, that could change. And that would be mind-bendingly weird.

I just saw an interview with him last night, where he discussed full-power computers the size of a blood cell, us mapping out our minds for the good of all, etc. It reminded me of the utopian 1950s vision of the space age, where we'd all be floating around space circa 2001: It's not going to happen. First, he's ignoring some physical limitations, such as with the size of computers, but that's not even the main issue. The main issue is that he's ignoring politics. He's ignoring the fact that technologies which come into existence get used by existing power structures to perpetuate their rule, not necessarily "for the good of all". The mind-reading technology he predicts won't be floating around for everybody to play with; it will be used by intelligence agencies to prop up regimes which will scan the brains of potential opposition, consolidating their rule. Quantum computers, given their code-breaking potential, won't be in public hands either, but rather will strengthen the surveillance operations of those who already do this stuff.

In other words, this technology won't make the past go away any more than the advent of the atom bomb made the middle-ages Islamic mujahideen go away. Rather it will combine with current political realities to accentuate the ancient divide between haves and have-nots.

You are missing some larger trends here. It's true that the Internet, GPS, etc. came from the military and went to civilian hands, but that was then, this is now. Our entire post-9/11 reality has been about "what happens when the middle-ages guy gets the nukes", and the thinking about technology passing into civilian hands is changing dramatically with that. The other factor is that we are moving into a time when more competition over resources is coming; we can rely less on limitless expansion. Call me a pessimist, but I

Watching TV shows from the 60's one thing strikes me: life is almost exactly like it was 40 years ago. I can now order books without talking to anyone. Big deal. The telephone was a much bigger deal than the Internet, and it's more than 100 years old. Here's more progress: people don't know their neighbors and can't let their kids wander the neighborhood.

Progress is slowing, not accelerating, and in some respects we're making negative progress.

Here's more progress: people don't know their neighbors and can't let their kids wander the neighborhood.

They may choose not to more now, but to the extent they do, it is largely due to media-driven hysteria: the actual incidence of the kinds of crime that are the focus of the fears behind that decision has declined, while the perception of the incidence of those crimes has increased.

That's because humans are still humans, not because technology hasn't evolved at a rapid pace. Sure, cars still drive you from A to B, television still shows you the daily news, and newspapers haven't really changed in a while, but on the other side I can buy for 100 bucks a device that can store two years of non-stop, 24/7 music, more music than I will likely ever listen to in my entire lifetime or be able to buy legally. For as little as ten bucks I can buy a fingernail-sized storage device that can stor

The main issue is that he's ignoring politics...technology won't make the past go away any more than the advent of the atom bomb made middle ages Islamic mujahadeen go away. Rather it will combine with current political realities to accentuate the ancient political realities of haves and have not that date back to ancient times.

Interesting. We are the undermining factor, then, of our own progression.

Government isn't run by supervillains looking to "perpetuate their rule".

Most of it will probably stay in military and academic circles for a little while, but that stuff always goes into the private sector eventually.

To which government are you referring? The sad reality is that it only takes one government to exploit a new technology negatively, and if it gives them the edge to do so, you can bet the US will follow suit, no matter how good our original intentions are. Looking at the way nuclear weapons have affected us over the last half century, I think I'm being pretty level-headed in fearing new arms races and their effect on humanity: there is already so much historical precedent for that happening.

for my Moravec transfer. Although the more I think about it, I'm not sure that perceptible continuity of consciousness is such a big deal. I mean, I go to sleep every night and wake up the next day believing and feeling that I'm the same person that went to sleep. If there were a cutover to digital representation while I was "asleep" (i.e. unaware), I'm not sure I'd mind the thought of my organic representation being destroyed, even if it could have continued existence in parallel.

Yeah, this is a lot like how I think a matter transporter would work. Make a copy and then destroy the original. Star Trek makes it all look so clean, but you never get to see Scotty cleaning all the meaty corpses out from under the transporter pad.

That made me laugh and think of them taking the technology from Body Snatchers and adding a blinky-light interface. I see life more as a vector, and it may be pointing at the distant stars. I must agree with some others here and say that we will not get the benefit of these new technologies unless we create them for ourselves and maintain the right to use them freely. "Mom! My USB drive is stuck in my ear again."

Is it death, or amnesia?
What if you knew you would wake up tomorrow with no recollection of today's experiences? Would you treat it as a death, or as a loss of one day?
I believe that in such situations the concept of 'death' needs to be revised.

While you're asleep your brain and body are engaged in a massive set of synchronised, necessary metabolic activities and cognitive processes that are essential for "you" to exist. Proof? Eliminate sleep from a human and see how long before death or derangement ensues.

One lecture I had from a sleep biologist impressed me immensely. He was demonstrating all the different cycles that are engaged or differently regulated during human sleep. Then there were a bunch of comparative

It's almost Luddism to say that machines 'will inevitably destroy humanity' or other such statements. Fears over the rise of AI make for a good movie plot but, much like the much-feared 'grey goo' scenario, are unfounded. If and when indeed we have the technology level to produce a self-replicating nano-machine that can be programmed to dismantle organic matter, and it can exist on its own, gathering energy from its environment rather than specific laboratory conditions (ie UV laser light as energy sou

> If Robert is 700 parts Ultimate Brain and 1 part Robert; and
> Ray is 700 parts SuperiorBrain and 1 part Ray... i.e.,
> if the human portions of the post-Singularity cyborg beings
> are minimal and relatively un-utilized... then, in what sense
> will these creatures really be human?
> In what sense will they really be Robert and Ray?

IMO, as long as there are enough cycles to run the 'ego subroutines' from the original bioform then the same sense of self will be maintained.

Ray Kurzweil, isn't he the Jon Katz of the transhumanist movement? I just remember there's supposed to be a couple of really good writers and philosophers, and then one incredible douchebag that makes all of the rest look bad, someone whose approach to the topic is reminiscent of the very worst of Thomas Friedman (not to imply there's a best of Friedman).

He's talking about genetic enhancement, nanotechnology, robotics, AI and more. And you "only" need one of these to reach a critical level for the Singularity to occur. For instance:

* Genetically enhance humans to be better at genetically enhancing humans, rinse and repeat.
* Make strong AI capable of creating stronger AI, etc.

The singularity is the biggest embarrassment in futurism since the flying car and Martin Landau on the Moon by 1999. Well, OK, Gerry Anderson wasn't really a futurist, but you know what I mean. Mod me troll if you must, but you know in my heart I am correct. Sorry, kids, but there won't be a reverse-engineered version of your mind enjoying immortality in a machine somewhere.

Re-engineering biological systems takes generations to debug. And a huge number of dud individuals during the development process. This is fine for tomato R&D, but generating a big supply of failed post-humans is going to be unpopular. Just extending the human lifespan is likely to take generations to debug. It takes a century to find out if something worked.

AIs and robots don't have that problem.

What I suspect is going to happen is that we're going to get good AIs and robots, but they won't be cheaper than people. Suppose that an AI smarter than humans can be built, but it's the size of a server farm. In that case, the form the "singularity" may take is not augmented humans, but augmented corporations. The basic problem with companies is that no one person has the whole picture. But a machine could. If this happens, the machines will be in charge, simply because the machines can communicate and organize better.

Computers become smarter than humans. Human consciousness becomes downloadable...ermm...somehow... and we live forever as computers.

The sad part is that it seems like it's all wishful thinking on the part of Kurzweil, who's really scared of dying. So my bet is that his outlandish and baseless predictions are so popular because they fill a void in the "don't worry, you won't really die" department that religions used to fill. So the whole Singularity thing really is a secular techno-cult of some sort, and Kurzweil is the guru and prophet.

I always thought of it more as a techno-rapture and that's the way I've seen it referred to in other places.

Even the most committed atheist can understand the attraction of religion and the idea of a rapture and a heaven, life everlasting. These are all very human yearnings. The difference between the religious rapture and the techno-rapture is that the means of making the latter happen lie within our grasp. Certainly we could create the new heaven and new Earth and the reign of a thousand years right here and now. We have the technology, we have the knowledge; what we lack is the wisdom.

The poster who compares it with 1950's futurist utopianism is exactly right. We could have had the future depicted in 2001, we could have an end to world hunger, an end to disease, and if not an end to death then a comfortably long delay in its arrival. The problem is that we're still very human at heart and humans are not that far removed from the trees. We are selfish, grasping, petty animals and those few acts of sublime virtue from the best of us simply serve to make the rest of us look all the worse.

We've yet to develop a political system adequate to the task of promoting the greatest good for the greatest number without allowing unhealthy power and influence to be amassed by our least deserving fellows. Unfortunately, the very people who are most willing to acquire power are seldom the ones who should have it. The complaint I hear from my friends deeply involved with the Democrats is that there are plenty of good people they'd like to run as candidates but so many of them want nothing to do with politics. They're happy to put in the long hours behind the scenes but the thought of being in the spotlight and having all the attention on them is about as attractive a thought as a root canal. Someone actually willing to take that kind of attention is more than likely going to be someone like a John Edwards, a nice smile and slick approach but ultimately a self-serving jerk so blinded by his own awesomeness that he'd pull stupid shit like having an affair and then throwing his hat in the ring for the presidency.

I'm curious as to what the potential implication of a Singularity is for technology but I don't know if that would change the human situation all that much. There's been some good speculative fiction written along these lines in the Orion's Arm universe. It's trying to be a very hard SF look at future space opera. The few aliens are all completely inhuman, the humanoid aliens are actually all modified people from earth, terragen life as they call it. There's various scales that sophonts fall onto from sub-human to AI gods and all sorts of tech levels from stone-age to planck-age. It's certainly worth a look.

I found your post very well-thought, and an interesting read, but one note struck me as odd:

and humans are not that far removed from the trees. We are selfish, grasping, petty animals

What the hell do the trees look like where you live? They sound like they'd scare the *shit* out of me.

I assume you're funning with me here but if not... Chimps are our closest animal cousins and they're not all that nice. Sure, they'll make a few cute and kooky commercials but then they'll chew a lady's face off or cannibalize other chimp infants or do all sorts of horrible things. That's what I meant by saying we're not all that far removed from the trees, i.e. having come down from the trees, i.e. speciated from the common ancestor between modern man and modern chimp.

> If we were able to bring back a Neanderthal and he grew up in the lab interacting with scientists and a surrogate mother who would, of course, still be a human being, we'd probably appear more god-like than as simple father and mother figures. We have mysterious magic machines whose workings would be beyond him, move in mysterious ways.

Huh? You're not making any sense now. People a thousand years ago would find our machines magical too, but if we were to clone one of those people and raise them like a normal person in our time, there is no reason why such a person wouldn't accept (and understand) technology like everybody else does. Likewise, although your hypothetical Neanderthal may have below-average intelligence, there is no reason to believe he would worship our technology any more than a person with Down syndrome would. If we assume he'd merely have below-average intelligence without being retarded, the cloned Neanderthal would probably own an iPod and enjoy it very much, even though he could never understand how it works (just like most humans).

How you view technology has to do with your culture, not with the time period your DNA comes from.

The difference between the idea of the religious and the techno-rapture is that the means of making it happen lie within our grasp... We have the technology, we have the knowledge, what we lack is the wisdom.

No, they aren't in our grasp, they aren't even close to being in our grasp. They're no more in our grasp than transmutation of lead into gold was within the grasp of alchemists -- we can describe conceptually what we would like to happen (we mix chemicals, lead turns to gold; we download our minds into

You perhaps forget that virtually all human advancement begins with 'wishful thinking'. This is a scientific problem. You have a human consciousness. In a secular, materialistic worldview, a human consciousness is nothing special. It's basically assumed to be nothing more than really obfuscated software running on a biological, carbon-based computer. Given that assumption, it is a natural step to find some way to copy it, intact and functioning, to a more resilient inorganic, silicon-based computer. The difference between this and all the various soul-based afterlife nonsense of religions should be obvious to anybody. This is a potentially plausible objective hypothetical physical/material process. It's an idea based on hard facts that may actually work given enough research, testing, and further advances in hardware and software design.

You perhaps forget that virtually all human advancement begins with 'wishful thinking'.

Yeah, that, that's a variation of the classical "they said Galileo was wrong when he was right, you say I'm wrong therefore I'm right" argument.

In a secular, materialistic worldview, a human consciousness is nothing special

Yeah, because in a "secular, materialistic worldview", we know almost everything about pretty much anything. How does your "brains are computers" view explain such research [wikipedia.org]? Oh wait, I forgot that our beloved aforementioned worldview consists in denying such things in the face of eviden

I would argue that if reincarnation is real, it underscores, not undermines, the possibility of transferring consciousness. If the natural/supernatural world does it already, then doing it artificially may again just be a matter of process.

"a nasty surprise"

Somehow you're making the leap from "we don't know how now" to "when the visionary attempts X, he will fail". If we lived like that, we'd still be stoning the people showing us how to use fire. Not to mention, if it takes simulating an entire body to replicate a human digitally, so be it. It only takes more CPU to do that. CPU is cheap, and it's only going to get cheaper. Don't stand as an obstacle to progress; we'll keep going right over your head.

With minor paraphrasing, you pose the question "what if everything is impossible?" That's the stupidest question in the history of all luddites. Even if--and that's a massive if--it is provably infeasible to simulate an entire human, the research will be unimaginably valuable to any human. Brain prosthetics, broadband mind/machine interfaces, and safe treatments to target specific brain disorders are only the tiniest wedge of the foreseeable advances that sort of research can provide. Lastly, what "hardware li

The whole philosophy seems to smack of undying narcissism. It's ok to fear death; it's part of western culture, and key to survival. As we experience life individually and only marginally as a collective (civility as bad as it is), it's understandable that living forever seems like a good idea. We're here as an accident of our birth. Disembodied, we might evolve, but we're not designed for 400 years of life. Who knows what kind of cyber-insanity might evolve. I'm leaving it up to my kids to figure it out, a

I have no problem with wishful thinking, as long as it drives some kind of innovation. However, it was the point where Kurzweil revealed that dead people could be brought back to life by feeding their biography into a database, that's when I started to get this nagging feeling that I probably know more about neuro computing than he does. Which is kind of discouraging.

Also, judging from the trailer, this is going to be a movie about religion. Kurzweil's philosophy is pitted against religious belief probably

I agree. It may be sad and creepy, but the really bad part of it is that he apparently lacks any kind of understanding of what actually makes up the mind of a person. A mind is not the sum of epiphenomenal output data.

Sure, you can try to simulate something that is more or less likely to give you responses similar to known input patterns, but that is not what constitutes a person.

What you could then do to make it a person is feed that list of "expectations" into some kind of default brain, thereby filling in the many blanks with an actual neurological structure that can perform real cognition and exhibit consciousness. BUT - and here's the essence of the problem - all you did in the end was to create a new person that exhibits some of the traits of the dead person. In no way or form has the dead guy come back to life.

I think modeling and then enslaving an AI to perform like your long-dead father is morally questionable at best. It shows that in the end he has regard for neither the beloved person who regretfully ceased to exist nor the new slave entity that is forced to perform a perpetual make-believe job on his behalf.

Scientifically, the problem is entropy and the passage of time. Everything needed to "run" the entity that was his father is lost to decay and cannot be restored - barring a way to accurately retrieve molecular structures from arbitrary points in the past.

Indeed, the sad thing is (well, yet another of those sad things), you can't hear about the Singularity without hearing about Kurzweil, you can't hear about Strong AI (which may or may not be possible, what do we know?) without hearing of the Singularity, and you can't discuss AI without strong AI popping up.

So at the centre of this entire field of research you have that guy and his crazy ideas hogging up all the attention, and I'm afraid that he's only going to bring discredit to the discipline, just like a

Look at it this way: when I read the newspaper (or rather, the news website) and see words like "as a result of the accident, the child will be blind for the rest of his life", the first thing that pops into my head is that he won't be blind for the rest of his life; he'll be blind until we find a way to give him his sight back.

If the kid lost his retina, we can already fix that to some extent with a transplant. If the kid had his optic nerve destroyed, that might take us a couple of years to fix, maybe e

The key here is that Ray bases this prediction on past observations of things like Moore's law. Even though he does cherry-pick, and there is no guarantee that the trend will always continue in such a fashion, the idea that distributed-system improvements are exponential isn't that far-fetched.

So basically what he is saying is that if the future behaves like the past, then we will see some major changes shortly, simply because we'll have processing power out the wazoo.

Even Moore himself thinks this will last at least until 2018, when silicon transistors reach their theoretical limit at the atomic scale. Whether the industry finds a suitable replacement for silicon, or finds another way to go about making processors, is another thing altogether.
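For fun, here's a back-of-the-envelope sketch of what "reaching the atomic scale" means. All the numbers are my own rough assumptions, not Moore's or Intel's: transistor counts doubling every ~2 years implies linear feature size shrinks by about sqrt(2) per doubling, so you can count how long a 2009-era process has until it hits some hand-wavy atomic floor.

```python
# Toy Moore's-law extrapolation (assumed numbers, not industry data):
# a doubling of transistor count every 2 years shrinks feature size
# by ~sqrt(2) each step.  Count 2-year steps from a 2009-era 45 nm
# process down to a guessed ~1 nm "atomic" floor.
import math

feature_nm = 45.0      # rough 2009 process node (assumption)
atomic_floor_nm = 1.0  # a few silicon atoms wide -- pure guesswork
years_per_doubling = 2

year = 2009
while feature_nm / math.sqrt(2) >= atomic_floor_nm:
    feature_nm /= math.sqrt(2)
    year += years_per_doubling

print(f"naive scaling hits the floor around {year} (~{feature_nm:.1f} nm)")
```

With these made-up parameters the wall lands in the late 2020s rather than 2018; the point is just how few doublings are left, not the exact year.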

My bet is that Intel, IBM, and AMD are putting big bucks into getting past the silicon limit, because that is their cash cow.

So if the trend does continue, things like the Blue Brain Project [wikipedia.org] will have an easier time running their simulations.

I don't know about the whole nanotech emergence, but at least it looks like we might get the AI thing solved within 50 years.

Kurzweil's predictions aren't just based on modern trends but historical shifts as well. In fact, I thought one of the big pieces he shows is a graph of 'paradigm shifting events' against time. These would be technologies that changed everything at the time; things like agriculture, the printing press, nuclear power, the transistor, etc.

It isn't the gradual improvement of transistor technology that makes the singularity interesting; it's that transistors will be old news in 20 years, replaced by some new technology that we can't even speculate about right now. It's about the shifts, not the gradual evolution.

No, his argument is that lots of cool stuff happened in the past, and the cool stuff is happening more and more rapidly as time goes on. Basically, each major 'cool thing' that happens increases the amount of processing power being used to solve the next problem and create the next cool thing.

Agriculture led to a massive population increase that in turn led to more human beings working to solve problems. Iron tools reduced the time it took to do tasks and freed up more time for other pursuits. The printing press led to the education of vast numbers of people who would otherwise have remained ignorant. Computers aid research in ways that no one could have imagined 70 years ago.

If you grant that progress is happening at an accelerating rate, there comes a time in the future where things change dramatically in very short periods of time. If you choose to call that point "OMG ponies!!!!!" so be it.
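The "accelerating cool stuff" claim actually has a simple mathematical shape: if each gap between major events shrinks by a constant factor, the gaps form a geometric series, and the events pile up before a finite date. A toy sketch with numbers I made up entirely (a 50-year first gap, each gap half the previous one):

```python
# Toy accumulation-point model (all numbers invented for illustration):
# gaps between "paradigm shifts" halve each time, so their sum
# converges to first_gap / (1 - ratio) = 100 years past the start.
first_gap = 50.0   # years between the first two events (assumption)
ratio = 0.5        # each gap is half the previous one (assumption)

year = 2000.0      # arbitrary starting date
gap = first_gap
for _ in range(30):
    year += gap
    gap *= ratio

# No event in this model ever lands after year 2100, no matter how
# many more iterations you run -- that's the "singularity" date.
print(year)
```

Of course, whether real technological gaps shrink geometrically is exactly the assumption under dispute; the math only says that *if* they do, an accumulation point follows.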

Yes but his argument is still flawed even if you refine it slightly. There are many problems with his assumption, but even one is enough to derail it:

Assume that each previous advance multiplies the amount of result for a given effort. You only get accelerating returns when the growth in required effort is below a critical threshold. For certain previous advances, and certain successive problems this has been true.

It does not imply that it always holds, or that it will continue to hold in the future, or even that it holds for any particular problem. "OMG ponies!" doesn't refer to any amount of progress - it refers to a lack of understanding of what a given problem is, and how much effort is required. Perhaps Arthur C. Clarke phrased it better when he called it magic.

20 years ago, I had a disagreement with my then biophysics prof when I advocated the use of large networks of PC clusters for studying protein folding and interactions. His line of argument was effectively that I had a lack of understanding of what the problem is, and how much effort is required. Today companies like Zymeworks [zymeworks.com] specialize in performing that kind of work for pharmaceutical companies on a contract basis. They use quantum chemistry simulations running on small clusters of commodity hardware to d

Yes, but 20 years ago a computer network was not a hypothetical then-impossible idea. Before the first computer network existed, people understood what technological barriers they would have to overcome to create one, and they already knew how to split a task into multiple parts on separate processing units. It was an engineering problem. It was the engineering problem that your professor was stuck on. Call me when the major obstacle to any of these Futurist p

Yes, but 20 years ago a computer network was not a hypothetical then-impossible idea. Before the first computer network existed, people understood what technological barriers they would have to overcome to create one, and they already knew how to split a task into multiple parts on separate processing units. It was an engineering problem. It was the engineering problem that your professor was stuck on.

Agreed.

Call me when the major obstacle to any of these Futurist predictions is the amount of effort requ

Clearly you can have a "human mind's worth of computing power" run on only 100W or so. However, it's unclear whether you could run an emulation of a human mind on any reasonable amount of power. Or, for that matter, at all. As yet, there's not the least shred of evidence that either AI or human consciousness transfer is possible.

AI has been 50 years away for 50 years now. Fusion has been 20 years away for 50 years now. I can only conclude that fusion will be a mature, 30-year-old technology, ready to power AIs. :)

Personally, I think that software consciousness will turn out to be quite easy in hindsight, just a matter of learning the trick, but I have no actual evidence for this belief. Has any published futurist ever been right about anything?

Personally, I think that software consciousness will turn out to be quite easy in hindsight,

Agreed. It's going to be an 'everything-and-the-kitchen-sink' kind of problem. Put enough of the right systems together, and it will emerge rather on its own.

The problem isn't going to be creating an artificial intelligence. The problem is going to be in making it an autonomous agent that can be socially integrated into society. Think how long it takes to raise a kid... teaching the kid language, potty trai

IAACE (I Am A Computer Engineer). I agree transistors will not be old news in 20 years, but I think you're looking too broadly. I believe the idea that they will be old news relates to their use in (high-performance) computing. It really took from about the 1980s until now, around 20-30 years, for computers to get *really* popular.

Photonic computing is really in the stage where transistors were in the '60s and '70s. We already have proven concepts and a good idea of where to go, so I don't see the statement "

The key here is that Ray bases this prediction on past observations of things like Moore's law. Even though he does cherry-pick, and there is no guarantee that the trend will always continue in such a fashion, the idea that distributed-system improvements are exponential isn't that far-fetched.

Since there are physical limits involved, it would intuitively seem vastly more plausible to suggest that the improvements would, in the long term, be logistic rather than exponential (and, of course, a logistic growth c
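To put some numbers on the logistic-vs-exponential point (toy parameters of my own choosing): a logistic curve is nearly indistinguishable from an exponential early on, then flattens toward its carrying capacity instead of diverging.

```python
# Compare exponential growth against logistic growth with the same
# initial value and growth rate.  K, r, and x0 are arbitrary toy
# parameters chosen only to make the divergence visible.
import math

K = 1000.0   # carrying capacity (assumption)
r = 0.5      # growth rate (assumption)
x0 = 1.0     # starting value (assumption)

def exponential(t):
    return x0 * math.exp(r * t)

def logistic(t):
    # Standard logistic solution with x(0) = x0 and ceiling K.
    return K / (1 + (K / x0 - 1) * math.exp(-r * t))

for t in (0, 5, 10, 15, 20):
    print(t, round(exponential(t), 1), round(logistic(t), 1))
```

Early on (t = 5) the two curves agree to within about a percent; by t = 20 the exponential has blown past 20,000 while the logistic is saturating just under 1,000. Which regime Moore's law is in right now is precisely what you can't tell from inside the early part of the curve.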

I think we'd know by now if another technology were going to supplant the transistor within 10 years. Indeed, our progress may slow as we approach this limit, i.e. Moore's law will slow down and 2018 is too soon. Evolution frequently just stops within a domain, like how marsupials just can't evolve flippers. But that doesn't mean evolution stops overall.

2) algorithms -- You can always just train more scientists and mathematicians to write more & better parallel algorithms. You may also fold these advancements back into compiler design for high level language compilers, like say Haskell.

3) subsidies redirection -- You can redirect all government subsidies towards helping young but solid technologies catch up, underwriting half the cost of optical fabs, for example. How much money gets wasted on farmers now?

4) smarter people -- You can try making smarter people through genetic engineering, pharmacology, and even research into education.

5) augmented people -- You can definitely augment people to improve specific tasks. If you augment children, you might change even more, like their will to do science.

6) clustered people -- You can make neurologically linked "people clusters" who think together towards some common goal, enabling you to solve harder math & science problems.

Moore's law is fundamentally flawed in that it predicts a never ending exponential (linear in the log domain) progression. It is bound to slow down and eventually stop, yet it fails entirely to take that into account.

What I think is that instead of being linear (well, actually exponential), it's more like a Gaussian function (a bell-shaped curve). It started far in the negatives, and now we're getting closer to the centre and its maximum, so we're feeling the slowdown, and eventually it'll crawl to a halt. Although maybe it won't, and then it'd be more like some other function; the point being, it can't go on exponentially like this forever.

All of this being said, I think the flaw in Kurzweil's predictions is not that we'll have a tough time getting the necessary hardware; it's more theoretical: we have no fucking clue how we'd make any of that happen. Right now it's a problem of theory and algorithms, not of computing power. We know better how to make time travel happen than how to make strong AI pop up.

Actually, I'm pretty sure with time travel I could fairly trivially build about the strongest AI possible. When you can perform an infinite number of operations in an arbitrarily short amount of time, quite a stupid algorithm can produce some pretty smart results.

Yeah, sure. But someone wake me up when we come up with even a stupid strong AI. Or any idea how to travel back in time.

Strong AI is our era's flying car, 50 years from now we'll think to ourselves "well that shit never happened, on the other hand the other stuff we have that we didn't see coming we wouldn't want to go back to living without it".

Moore's law is fundamentally flawed in that it predicts a never ending exponential (linear in the log domain) progression. It is bound to slow down and eventually stop, yet it fails entirely to take that into account.

That said, Intel still takes the idea deadly seriously when it comes to their marketing and future plans.

And as I said in other posts, before wondering how many transistors we'll need for that Singularity thing, I think we should wonder what we'd do with those transistors to begin with. It's not like having an immensely powerful computer will make sentient beings pop out of thin air.