Posted by Zonk on Saturday February 16, 2008 @11:24PM
from the i-need-me-an-implanted-robot-buddy dept.

Gerard Boyers writes "Some members of the US National Academy of Engineering have predicted that Artificial Intelligence will reach the level of humans in around 20 years. Ray Kurzweil leads the charge: 'We will have both the hardware and the software to achieve human level artificial intelligence with the broad suppleness of human intelligence including our emotional intelligence by 2029. We're already a human machine civilization, we use our technology to expand our physical and mental horizons and this will be a further extension of that. We'll have intelligent nanobots go into our brains through the capillaries and interact directly with our biological neurons.' Mr Kurzweil is one of 18 influential thinkers, and a gentleman we've discussed previously. He was chosen to identify the great technological challenges facing humanity in the 21st century by the US National Academy of Engineering. The experts include Google founder Larry Page and genome pioneer Dr Craig Venter."

Odds are extremely good for beyond-human AI, given no restrictions on initial and early form factor. I say this because, thus far, we've discovered nothing whatsoever that is non-reproducible about the brain's structure and function; all that has to happen here is for that trend to continue. And given that nowhere in nature, at any scale remotely similar to the range that includes particles, cells, and animals, have we discovered anything that appears to follow an unknowable set of rules, the odds of finding anything like that in the brain (that is, something we can't simulate or emulate with 100% functional veracity) are just about zero.

Odds are downright terrible for "intelligent nanobots." We might have hardware that can do what a cell can do: hunt for (possibly a series of) chemical cues, latch on to them, then deliver the payload -- perhaps repeatedly in the case of disease-fighting designs. But putting intelligence into something at the nanoscale is a challenge of an entirely different sort, one we have not even begun to move down the road on. If it is to be accomplished, the intelligence won't be "in" the nanobot; it'll be a telepresence for an external unit (and we're nowhere down *that* road, either -- nanoscale sensors and transceivers are the target, and we're more at the level of "Look, Martha, a GEAR! A pseudo-flagellum!").

The problem with hand-waving -- even when you're Ray Kurzweil, whom I respect enormously -- is that one wave out of many can include a technology that never develops, and your whole creation comes crashing down.

I'll go Ray one better. We will have this before 2029. My company is working to bring to market synthetic intelligences that among other things have feelings, emotion, and mood, and understand human emotion. One may say, why try to do this? Because in order to understand people, a synthetic intelligence must understand these things, which serve real functions in people, and real functions in AIs trying to operate in a people world.

Note that it is not necessary to build 'perfect robots'. People think, and yet they are not perfect. They make mistakes, yet navigate through life. So we do not have to make flawless logic brains. The way people work is that we try to find good if not optimal solutions to problems, but we do not always exhaustively search for the perfect solution. Thus many problems in life can be solved in different ways than you would expect. We do not have to build a machine that finds the optimal solution to a traveling salesman problem in order to make a system that can walk from the kitchen to the front door. It just has to be able to get there reasonably optimally. Also, we do not have to replicate the human brain in order to think much like a human, we merely have to come up with functional systems that can provide similar functions. For instance, the human brain has the amygdala, which can be likened to an interrupt controller for emotional responses. Well, that functionality can be done in a hardware-software system that reasons about priorities of tasks and goals depending on their current 'value' of urgency to the 'brain'.
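The amygdala-as-interrupt-controller idea above can be sketched in a few lines. This is a toy illustration only; the class, the urgency scale, and the goal names are all invented for the example, not taken from any real system:

```python
import heapq

# Toy sketch (all names invented): goals sit in a priority queue keyed by
# urgency; an "amygdala"-like check can preempt the current goal whenever
# a stimulus arrives whose urgency outranks it.
class GoalManager:
    def __init__(self):
        self._queue = []      # min-heap of (-urgency, name)
        self.current = None   # (urgency, name) currently being pursued

    def add_goal(self, name, urgency):
        heapq.heappush(self._queue, (-urgency, name))

    def step(self):
        """Pop the most urgent queued goal and make it current."""
        if self._queue:
            neg_u, name = heapq.heappop(self._queue)
            self.current = (-neg_u, name)
        return self.current

    def interrupt(self, stimulus, urgency):
        """Preempt the current goal if the new stimulus is more urgent;
        otherwise just queue it for later."""
        if self.current and urgency > self.current[0]:
            # re-queue the preempted goal and switch immediately
            self.add_goal(self.current[1], self.current[0])
            self.current = (urgency, stimulus)
            return True
        self.add_goal(stimulus, urgency)
        return False

mgr = GoalManager()
mgr.add_goal("walk to front door", urgency=3)
mgr.step()                                    # now pursuing the walk
preempted = mgr.interrupt("smoke detected", urgency=9)
print(preempted, mgr.current)                 # True (9, 'smoke detected')
```

Nothing here is "emotional" in itself, of course; the point is only that urgency-driven preemption of goals is an ordinary, implementable control pattern.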

Many current researchers are missing the mark. For example, as good as it is, the widely used AI textbook by Stuart Russell and Peter Norvig (who heads research at Google) has major omissions. It does not dwell on key things needed to bridge between AI and human psychology. Other things, like the OCC model of emotion used in AI, are incomplete and incorrect in parts. A new approach has been needed, and it's one I've been developing for decades in stealth mode. I'm writing a 5-volume book set on it. I want it to be the Knuth set of AI.

I'm in the process of patenting the mechanizations of my underlying technologies, and trying to cut deals with companies making multicore processors so their architectures support the thread swapping needed to make virtual neural nets practical. Once we get a 1,024 simplified-core processor that supports virtual NNs, it'll be a lot easier to build a machine with many of these that does for NNs what disk swapping does for OSs than to build a hardwired billion-neuron machine. And easier to do visual perception systems properly, too. So Ray is right. If I can drive certain companies to build the right silicon, we can get there by or before 2029. My current software does what I said, but it's too slow on current hardware. It needs new processors and new system architectures, and it will take 20 years to get the infrastructure all built up. But not a lot more.
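The "disk swapping for NNs" analogy can be made concrete with a miniature sketch: weight blocks live in slow backing storage and are paged into a small fast resident pool on demand, with least-recently-used eviction, just as an OS pages memory. Every name here is invented for illustration and has nothing to do with the poster's actual system:

```python
from collections import OrderedDict

# Illustrative sketch: a "virtual neural net" pool that pages weight
# blocks from slow backing storage into a small fast resident set,
# evicting the least-recently-used block when the set is full.
class VirtualNetPool:
    def __init__(self, backing, resident_limit=2):
        self.backing = backing          # block_id -> weight block (slow store)
        self.resident = OrderedDict()   # the "fast" pool, in LRU order
        self.limit = resident_limit
        self.faults = 0

    def fetch(self, block_id):
        if block_id in self.resident:
            self.resident.move_to_end(block_id)      # mark recently used
        else:
            self.faults += 1                         # "page fault": slow load
            if len(self.resident) >= self.limit:
                self.resident.popitem(last=False)    # evict LRU block
            self.resident[block_id] = self.backing[block_id]
        return self.resident[block_id]

backing = {i: [[0.1 * i] * 4] * 4 for i in range(8)}  # stand-in weight blocks
pool = VirtualNetPool(backing, resident_limit=2)
for blk in [0, 1, 0, 2, 3, 0]:        # access pattern with some reuse
    pool.fetch(blk)
print(pool.faults)  # 5: block 0 was evicted once and had to be refetched
```

As with OS virtual memory, everything hinges on locality of reference: the fewer distinct blocks a given stretch of computation touches, the fewer expensive fetches occur.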

My current software does what I said, but it's too slow on current hardware.

That's a *huge* claim; if it is true, you have AI now. Because -- as I explained in a previous post in this thread -- speed is absolutely irrelevant. If you can demonstrate your claim that your software operates now, no matter *how* slowly it operates, you are at the end of your funding issues, not to mention any other issues you may face in life. Which -- to be frank -- is why I doubt your claim. At the point you explicitly claim to be at, I'd already own a mega-yacht and be pulling up next to a lot of potential love.

But good luck, and I really mean that. I'd much rather be wrong and see you bring this right to the table, even if you have completely blown the financial potentials of the development process.

It's okay to doubt it; until I demo pieces to people, they don't believe it either. I do have strong AI right now running on a von Neumann-class processor, but AI really needs a massively parallel architecture. Speed IS relevant. You can't just run big, wide NNs simulated on a von Neumann machine; it takes forever. The best architecture I've found is one where processing and recognition are the same as memory. That is, once the AI learns something, the pattern recognizer also serves as the memory and the belief logic. In a way, the NNs ARE the data storage; there is no separate RAM. And my AI is implemented in a connectionist architecture that embodies symbolic processing in a new way. The knowledge storage is synonymous with the logic. But this requires massive numbers of simple logic units and massive parallelism. Further, it is not completely self-organizing from the ground up; I view that as a hopeless approach.
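The claim that "the NNs ARE the data storage" has a classic textbook illustration: a Hopfield associative memory, in which stored patterns exist only as weights and recall is just running the recognizer until it settles. This is the standard model from the literature, not the poster's proprietary architecture:

```python
# A tiny Hopfield net: the stored pattern lives only in the weight
# matrix; recall IS recognition. Pure-Python, bipolar (+1/-1) units.
def train(patterns, n):
    # Hebbian outer-product rule, zero diagonal
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, state, steps=5):
    n = len(state)
    s = list(state)
    for _ in range(steps):
        for i in range(n):        # sequential update sweep; fine for a demo
            total = sum(w[i][j] * s[j] for j in range(n))
            s[i] = 1 if total >= 0 else -1
    return s

stored = [1, -1, 1, -1, 1, -1]    # the "memory" exists only in the weights
w = train([stored], n=6)
noisy = [1, -1, -1, -1, 1, -1]    # one bit flipped
print(recall(w, noisy) == stored) # True: recognition doubles as retrieval
```

Whatever the poster's real design looks like, this is the sense in which a connectionist system can have no RAM separate from its logic: presenting a corrupted cue and letting the recognizer settle *is* the memory lookup.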

I define consciousness and awareness within a pre-determined architecture, not entirely self-organizing from scratch. The visual front-end in particular is very rigid, but I think it is okay because there is little need for an organism to self-evolve completely new architectures, but rather to be able to run cascaded pattern-recognizers. See the work of Biederman for examples of this. The VFE feeds deeper processing doing cognition, and there is feedback from that to the VFE for training. Just as a baby learns to see, and recognize shapes, and build up from that. The front end is trained as the cognitive end learns and grows too.
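A deliberately crude sketch of the cascaded-recognizer idea: a rigid front end extracts low-level features, and a second stage classifies over them. Everything here is invented for illustration; real systems (see Biederman's work on recognition-by-components) are far richer:

```python
# Toy cascade (all names and categories invented for illustration):
# stage 1 is a rigid "visual front end" over a 1-D signal, stage 2
# recognizes coarse shapes from stage 1's features.
def front_end(signal):
    """Rigid first stage: detect rising/falling 'edges'."""
    return ["up" if b > a else "down" if b < a else "flat"
            for a, b in zip(signal, signal[1:])]

def recognizer(features):
    """Second stage: classify the feature sequence into a crude shape."""
    if "up" in features and "down" in features:
        return "bump" if features.index("up") < features.index("down") else "dip"
    return "slope" if "up" in features or "down" in features else "flat"

print(recognizer(front_end([0, 1, 3, 3, 1, 0])))  # bump
print(recognizer(front_end([2, 2, 2, 2])))        # flat
```

The feedback loop the poster describes (deeper cognition retraining the front end) is not modeled here; the point is only the layered structure, each stage recognizing patterns over the previous stage's output.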

AI does not spontaneously rise alone from massive databases, either. I view that approach as useless and a false trail. However, human intelligence does depend on belief systems and knowledge, and those continually grow as we mature from infancy. But to create the equivalent of an 18 year old, you have to have what amounts to 18 years of accumulation of knowledge about the world, and draw upon that. And there is a key but proprietary subtlety about that I'm devoting an entire volume to, that is the key to humanlike AI. That volume is essentially a doctoral thesis about consciousness reworked for use by a design staff. As for funding, no yachts yet. But I'm real, sane, and not a charlatan, and have explained my technology to my patent attorney. I expect to be hiring staff within two years. I posted on Slashdot not for glory but to counter all the nay-sayers who haven't a clue what is achievable.

Usually when someone makes such fantastic claims, like being very close to cracking AI, or trying to become AI's Don Knuth, the person is either clearly trying to be ironic, or leaves the distinct impression of being a bit unhinged.

As you seem to be both sincere and making a lot of sense, I have a message for you:

Yes, but what do they mean by "human level intelligence", in particular, which human are we talking about? I mean, if "human level intelligence" means "as smart as George W. Bush", then I wouldn't trust that machine to handle my taxes, let alone any really critical tasks.

Speaking as someone with a PhD in AI, I'm very, very skeptical about having human-level AI by 2029.

Whatever definition of intelligence you choose, it probably includes learning and reasoning components. We have some effective learning algorithms, provided your domain is very specific and you have boatloads of training data. We have next to no good reasoning algorithms. Complete search is a dead duck, and incomplete search is not very reliable. Worse, search algorithms get seriously confused when the database is inconsistent (humans are good at maintaining several incompatible world models simultaneously). And that's all before you consider that we have no psychological models of human reasoning that are anywhere near specific enough to guide an implementation project (please don't mention "Society of Mind"). Finally, there is precious little funding out there for this kind of research, which is a shame, but there you go.
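The inconsistency problem is easy to demonstrate: in classical logic an unsatisfiable knowledge base entails everything (ex falso quodlibet), so a reasoner fed contradictory data can "prove" any conclusion at all. A brute-force model checker over invented propositions makes the point:

```python
from itertools import product

# Entailment by exhaustive model checking: KB entails query iff the
# query holds in every assignment that satisfies the KB. If the KB is
# unsatisfiable, entailment is vacuously true for ANY query.
def entails(kb, query, variables):
    """kb and query are predicates over an assignment dict."""
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if kb(env) and not query(env):
            return False
    return True

consistent   = lambda e: e["rainy"] and not e["sunny"]
inconsistent = lambda e: e["rainy"] and not e["rainy"]   # contradiction

vars_ = ["rainy", "sunny", "moon_is_cheese"]
print(entails(consistent,   lambda e: e["moon_is_cheese"], vars_))  # False
print(entails(inconsistent, lambda e: e["moon_is_cheese"], vars_))  # True
```

Humans shrug off this kind of contradiction by quarantining incompatible beliefs; a classical reasoner, as the comment notes, simply melts down.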

That's probably because we have discovered little about the brain's structure and function.

No, it is *probably not*. It *may* be, but since *nothing else* has presented us with that kind of problem, the odds of the brain doing so are pretty darned slim. You are postulating a heretofore never-achieved discovery in the course of determining how a mundane (by every indication) biological system, constrained as far as we know by the same physics and chemistry everything else is, operates. Considering the *f

Odds are extremely good for beyond-human AI, given no restrictions on initial and early form factor. I say this because, thus far, we've discovered nothing whatsoever that is non-reproducible about the brain's structure and function; all that has to happen here is for that trend to continue. And given that nowhere in nature, at any scale remotely similar to the range that includes particles, cells, and animals, have we discovered anything that appears to follow an unknowable set of rules, the odds of finding anything like that in the brain (that is, something we can't simulate or emulate with 100% functional veracity) are just about zero.

Sure, if you build in enough memory and processing power, the bottleneck ends up being the designers, but it'll be a really long time before the hardware gets to the point where that's possible.

At present, hardware will crash if a few bits end up in the wrong places or are stored incorrectly. One of the things about organic lifeforms is that our consciousness doesn't cease to exist if one of our neurons misfires; at worst we get a seizure or possibly a hallucination. Any machine that's going to surpass us would have to turn those wrong bits into something meaningful without human intervention, even if they are just unexpected rather than outright wrong.

It may very well be that computer technology will solve that problem, but quite a bit of what we are comes from these random misfirings and unpredictable, unreliable results. Modeling what humans are presently like, or even what humans will be like at the point when this becomes realistic, is far easier than creating something that will outdo us in intellect.

I'm somewhat skeptical of the claim that there is nothing a person's brain can do which cannot be modeled by software. When it comes to talking, moving, building, following instructions, and things of that nature, I see no reason why a machine couldn't be taught to do those things as well as we do. But when it comes to more subtle things, things which require creativity, sometimes things which require a deliberate violation of typical common sense, I'm skeptical that a machine could be taught to do them.

I'm especially skeptical about that considering that we don't even know most of the things the human brain does, or how it does them. We know many things, and we know enough to greatly benefit ourselves, but there is still a fair amount we don't understand about the brain. It is not a simple organ to understand; the amount of information gained about it just in the last 10 years is enough for me to say you shouldn't claim that there is no part of the brain which cannot be simulated.

I really don't want to suggest that it is impossible for us to create something that surpasses ourselves, but doing so would require things we haven't even dreamed up yet. Teaching a computer AI to be capable of meaningful creativity isn't something that is even on the most distant horizon; none of the programming languages or toolkits available today offer that sort of capability in anything resembling a reasonable number of lines of code.

Odds? If only these AI dweebs would pay off the debts from the bets they made on this exact same thing 30 years ago and 20 years ago. Human-level AI is always "20 years in the future." But the only AI we have now is one that can process simple information that it has gathered under a strict set of rules (and at that, it's already much better than we are). Wake me up when an AI knows what it's like to be hungry (a fairly basic experience needed for understanding people, and an AI that doesn't isn't "human-level").

Let's imagine that computer "processing power" doubles every 2 years for the next 20 years, from a combination of hardware advances and software algorithm innovation. That's not quite Moore's law, and it's not really likely to work smoothly just like that, but take it as one possibility. In that case, computers of 2029 will be 1,024 times as powerful as today's. So the question is: is a human brain within 1,000 times the power of a mouse's brain?

Maybe they aren't. But when you say a few centuries, I can't agree anymore. Let's imagine one century. Now we're talking 1.12589991 × 10^15 times. A human brain is CERTAINLY within that complexity range. The caveat is whether we can maintain the doubling rate for a full century. Well... Ray thinks we'll do far better than that (his "law of accelerating returns"); I'm not convinced we'll even be able to sustain the rate. Honestly, I think we're looking at a plateau maybe 10 or 20 years down the line, and we will look back at computing as an S-curve until the next big breakthrough, which nobody can predict. In my view, the last couple of "next big breakthroughs" happened at convenient times to make it look like we weren't following an S-curve but just climbing ever more steeply, and I don't see any reason why the next one should happen just as conveniently. But since it's unpredictable, I could be blindsided, and it could happen next week.
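The arithmetic behind both scenarios is easy to check:

```python
# Doubling every two years: 20 years gives 2^10, a century gives 2^50.
def growth(years, doubling_period=2):
    return 2 ** (years / doubling_period)

print(growth(20))    # 1024.0 -> the "2029" scenario
print(growth(100))   # ~1.1259e15, matching the figure quoted above
```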

Language isn't far off at all; we just about have it already. Emotions are nebulous, and some people will move the goalposts forever, while others may be prematurely convinced by a video game character. I'm not necessarily convinced they are the hardest part of this. I don't know how to make them, but I don't know how to do the rest of this either. I just often see emotion listed as the be-all and end-all most difficult task, and I've never seen any reason to believe that to be so.

Generally reasonable. What you're ignoring is parallelism. Parallel algorithms have been largely ignored up until now, but now that CPUs with multiple simultaneous threads of execution are available, and even the occasional multi-processor machine (the PS3?), more work is going to go into them. We're just at the start of that range of development, so I don't see the S-curve trailing off at anything like the rate that you do. What I see happening is that the current kind of development will slow

Wasn't there a simulation of a mouse's brain, or a few cells of it, for a few seconds with the help of a modern supercomputer? We can barely manage even that.

Well, let's look at the rate of general progress in computing. In 1971, we were putting 2,300 transistors on a chip. They ran at a few hundred kHz. In a fairly smooth progression, we've gotten to 3 GHz, where we're likely to stay, and today we're at about two billion transistors on a chip, with no end in sight as to how far that can go. This is not Moore's law; Moore's law is about how many fit into a particular space; this is about how many can be integrated into a functional unit. That's 36 years. Thirty-six years from now, that ability to "simulate a few cells" should grow, just in the *normal* scheme of things, into an ability to simulate a billion or so cells without any trouble. But there's more to this. Not everything in a cell needs to be simulated; for instance, metabolic processes such as waste generation and removal don't, nor do breakdown, aging, impacts by free radicals, all of that. Part of what needs to be done between here and the goal is to streamline the simulation so that it operates in the zone of mentation and not biological imperatives. I suspect, and yes indeed this is just my opinion, that the simulation will be much easier when we understand just what it is we need to simulate.
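For what it's worth, the figures quoted above (2,300 transistors in 1971 to roughly two billion 36 years later) imply a doubling time of just under two years:

```python
import math

# Back-of-the-envelope check on the transistor-count figures above.
start, end, years = 2300, 2_000_000_000, 36
doublings = math.log2(end / start)
print(round(doublings, 1))            # ~19.7 doublings
print(round(years / doublings, 2))    # ~1.82 years per doubling
```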

This all leaves out the issue of non-simulating intelligence, where the thinking is not patterned after human mechanisms; this could arise from evolutionary software or something along those lines. And of course, one of the reasons all this is kind of a holy grail anyway is that only the first intelligence is difficult; the second through the Nth is just a matter of copying a machine state.

As for language, that's solved in the I/O sense -- synthesis and "listening" are both satisfactorily complete. Intelligent discussion can only be expected from an intelligent machine, so that's only as far away as machine intelligence is.

Even if an intelligent computer was somehow created it would be an enormous accomplishment to have it be as intelligent as a bug or a small animal.

Small animals, I'm of the opinion, are a lot more intelligent than most people give them credit for. They just have a different intelligence. I am sure that we will go through the small animal level on the way to our level, and beyond; the thing is, if you can do the one, you can do the other. There's no indication of a significant difference in the wetware, there's just more of it and it is arranged somewhat differently. No reason to expect anything different from hardware designed to do the same job.

Emotions and language seem very far off, I'd say such a thing is centuries away.

Why? Small animals do both. Those aren't even the hard things. The hard things are introspection and self-awareness. Those are the ones we have not even a theory for, today. In any case, your ideas are certainly in a lot of good company; just not mine. I think we're only one discovery -- algorithmic in nature -- from AI. Self-awareness may turn out to be a property that self-organizes and arises without any special prodding from us; that would be marvelous, not to mention fortuitous, but hardly impossible -- again, that's how nature did it.

Here's why I think we're just an algorithm away. If you left a question that absolutely required intelligence on a counter, and went back to pick it up the next day, and the answer was there, you would agree that an intelligence had answered the question. Whether a human answered it in one second or an AI answered it in 23 hours, it's still just as intelligent an answer when you pick it up. The point is that speed really isn't the issue. The issue is the process, that is, the algorithm. So it turns out that speed, number of transistors, etc., are really not the limiting factor for developing intelligence.

Although I have mod points at the moment, it seems your comments have already been modded up anyway, so I guess I'll reply. We've actually argued about this before (probably the last time Kurzweil was mentioned on Slashdot), but as you say in a different comment, this discussion is always fun. :)

Slight nitpick, currently we are at one billion transistors on a chip, not two, but that doesn't really change the point you are making.

A bigger issue that I have with what you've described is that simulating a brain is not the same as "solving" AI. The problem that Kurzweil has is that he refuses to accept that there is a difference. Sure, if they are the same then strong AI is inevitable and it's merely a question of building fast enough hardware. But why assume that they are the same thing?

Twenty years from now we may have hardware that can simulate an entire human brain, and yet be no closer to understanding how to solve the many problems in AI. The mental sleight-of-hand that Kurzweil applies to this argument is: once we can simulate a brain we have AI; therefore the AI can design the next generation; therefore we will reach the singularity. This argument is a logical fallacy because it assumes that being able to run the system, and knowing how to design the system / how the system works, are equivalent.

Everything that we know about complex and dynamic systems tells us that this is not so. Given a simulation of the brain, it is reasonable to assume that intelligence is the ultimate emergent property of the system; understanding this property, and how to refine it, is the hardest problem that mankind has ever undertaken. Currently we don't really know how to pose the question, let alone how to arrive at an answer. To assume that some kind of standard engineering methodology will solve this in 20 years is wild speculation.

As always with AI, the hardware will be available, but nobody yet knows if we can write the software to run on it.

There's no indication of a significant difference in the wetware, there's just more of it and it is arranged somewhat differently.

Do you realize how wrong that statement is? There is a tremendous difference in quality (not quantity) between a normal animal, like a young Chimpanzee, and a young human with about the same mental volume. (OK. Make it a young gorilla.) The difference is that humans are wired to communicate symbolically with GRAMMAR!! Chimpanzees can "sort of" learn grammar. But they don't b

but humans are the only creature that has ever been scientifically shown to have anything like language.

That is incorrect. Language is the ability to communicate feelings, goals, results. It is not "speech." Some birds do indeed have the capability of speech; that is, they can make the same sounds we can, closely enough as to make no difference. Apes, however, have demonstrated actual communications using symbols, and even dogs have recently been found to have a consistent, though very small, vocabulary. Elephants and other animals have demonstrated the ability to think in the abstract (the "recognize oneself in the mirror and operate on the information thus provided" experiments). Lemurs use calls to communicate safety and status. Don't confuse the lack of vocal apparatus with an inability to communicate. They're not the same thing at all.

As for the rest, I think you've got it, essentially, but we disagree on scales. We'll see.

Er... no.

Language is much more than that: it is a system of symbols that can even be used to describe any other symbolic system, and which can be extended at need and at will; animal communication shows little or no indication of that.

Nobody in their right mind would deny that animals can communicate, and even that they can communicate very well. However, that alone does not make them capable of using a language.

The cognitive leap a simple verbing of a noun requires is beyond any other animal.

Wasn't there a simulation of a mouse's brain, or a few cells of it, for a few seconds with the help of a modern supercomputer? We can barely manage even that.

We already know that it's possible to contain 100% of real-time human brain functions in a casing 10cm by 10cm by 10cm and weighing under five pounds. Now we have to build one from the ground up with potentially slower, yet better understood technology. The problem, unfortunately, isn't related to hardware. I have no doubt that processor power will soon be sufficient for our needs, but without software that can think on the level of a human, it's just another personal platform to play Duke Nukem Forever on.

Most of us know jack about the algorithms that allow us to catch a baseball in flight, yet we can still do it. Furthermore, a person from 10000 BC with no math at all by today's standards could do it just as well as we can. Implementing solutions does not always require a complete understanding of what you've done. You can even be wrong and it'll still work for other reasons. So hard-pegging this to what we "know" could be a severe error.

And no, simply copying the brain structure will not be the answer.

That's a very bold statement, especially since (a) that's the way nature does it for all its intelligences, high and low, so we know the process works in the general case, and (b) as you say, we don't know many things yet, so claiming that we "know" what won't work seems to be disingenuous or at the very least not well thought out.

I think it is important not to conflate the fact that we don't understand something with the idea that it will be difficult once figured out or discovered as a consequence of some fortuitous sequence of events. That's been shown again and again not to be the case. It *may* be so, but it is by no means certain to be so, and for that matter, it isn't indicated by the complexity of the brain's hardware. The brain is considerably more formidable as a mass of immensely complex moderated connectivity than it is as a collection of cellular-level mystery machines, and a good deal of the complexity at the cellular level is almost certainly irrelevant to the task of thought -- keeping the cell alive is probably in no way related to non-pathological mental operation, yet there's a lot of hardware and systems involved in the task.

"The brain is considerably more formidable as a mass of immensely complex moderated connectivity than it is as a collection of cellular-level mystery machines, and a good deal of the complexity at the cellular level is almost certainly irrelevant to the task of thought -- keeping the cell alive is probably in no way related to non-pathological mental operation, yet there's a lot of hardware and systems involved in the task."

You (and most proponents of AI) have failed to answer any of the philosophical/me

That's a very bold statement, especially since (a) that's the way nature does it for all its intelligences, high and low, so we know the process works in the general case, and (b) as you say, we don't know many things yet, so claiming that we "know" what won't work seems to be disingenuous or at the very least not well thought out.

Copying the structure of the brain in all particulars would produce a brain, and so we would still have failed at producing AI. The best we could claim is to have copied natural intelligence artificially.

I agree with your charge of disingenuousness, or at least a lack of forethought, though. In theory, if we understood the brain better, we would have a better understanding of the problem. We are currently finding quantum mechanisms for various things in our body (including smell and hearing!) and we still have

> (a) that's the way nature does it for all its intelligences, high and low, so we know the process works

Uh... since we have no idea what the process is yet, this statement is meaningless. All you're making is a statement of optimism, and there's absolutely no basis for it. We have no idea what consciousness is, and we can't define it outside of subjective internal experience. Therefore, there's no reason for the optimism shown both in the original article and by all the people commenting in here.

However, to program a computer to simulate thought accurately, an accurate algorithm for thought (or the biological underpinnings of neural activity) IS implicitly required, as algorithms are the way the computer works.

No, you have missed my point. An algorithm or algorithms is certainly required, and I never meant to imply otherwise. Human understanding of said algorithm(s), however, is explicitly not required. And there are many paths that lead to such a situation. Whether one of those will take us to a form of AI remains to be seen, which is what I was saying.

I think it is impossible for any one brain to fathom how a brain works completely

It is one thing to understand the mechanism required for operation -- it is quite another to understand the state it is in. I think you are confusing the latter with the former; the former is relatively trivial, and the latter is not required any more than a complete understanding of the state of everything involved at NASA is required in order to create, launch and recover the space shuttle. Complex systems are holistic, mostly co-operative combinations of subsystems, and as long as someone, somewhere, understands (or understood at one time, or possessed an adequate analogy to, or approximation of) the subsystems, or even the subsystems that make up the subsystems, that's sufficient to develop a fully functional macro system. And -- most importantly -- it only has to be done once, because of the unusual copyable nature of the result.

Things like cold fusion, teleportation, quantum computing, virtual reality capable of universe-scale simulation, therapeutic gene engineering and nanosurgery, universal molecular constructors, interstellar flight, and perhaps even Dyson spheres... all of these we will get before we can truly start building human-matching AI. At least we know how we can handle all those other problems, the advances they require, and the research still needed in their fields.

On the other hand, making a machine with human intelligence is (literally) as easy as making a baby.

You need to be made to understand that we don't really "make" babies. All we really do is supply the raw materials to our prebuilt baby-making equipment and let them do the work. While we can currently observe pretty much the entire process (and observing the first part of the process is in fact one of the major drivers of the internet) we still can't mimic it. Get back to me when we can make a baby without using sperm, ovum, or womb.

"Atheism is also a religion, because you have to believe that there is no God. There's no proof of either existence or non-existence of a supernatural being."No it's not, it's a lack of theism. Many religious people seem to find it really difficult to get their head around. Religion and gods have absolutely nothing to do with our lives. We don't sit down every morning and pray to the void. We simply accept reality for what it is and don't see anything in our every day lives that needs a special explanation.

No it's not, it's a lack of theism. Many religious people seem to find it really difficult to get their heads around. Religion and gods have absolutely nothing to do with our lives. We don't sit down every morning and pray to the void. We simply accept reality for what it is and don't see anything in our everyday lives that needs a special explanation.

I know this is very much off topic from the main point of the story, but I can't let this stay here unanswered.

There are also multiple forms of atheism, including "environmentalists" (devotion to environmental causes as a religion), "universalists" (believing that somehow the whole universe will ultimately make sense), "scientists" (a solid belief that science alone can solve life's problems), "anti-theologists" (opposing any form of organized religion), and many others. It is very difficult to take such an emotionally charged term as atheism and force any sort of hard stereotypes on it. But I do argue that you can

I feel like I have to object to this characterization of atheism a bit. While I agree that many and even most atheists happen to feel passionately about various political and social causes (tending to be humanists and very concerned with improving human well-being), I think this ideological opinion doesn't have many analogies with religion.

For one thing, it does not compete with religion, and many strongly religious people (in every major religious tradition) have the same humanistic convictions and take t

I mean, it could happen, but this is so far from the current state of the art that I think we're talking 50-100 years forward in time. We have the brute power of computers but nowhere near the sophistication in software or neural interfaces to do anything like this.

I wrote the parent comment. Since I posted it, I've been trying to understand
how Ray Kurzweil could say something so foolish as "We'll have intelligent
nanobots go into our brains through the capillaries and interact directly with
our biological neurons."

Not only is he saying that there will be artificial intelligence in
only 21 years, but he is saying that the computers on which the new AI runs
will be so small they can travel like cells in our bloodstream, and do useful
work based on an extremely adva

I'll be meeting with Kurzweil in April.... Speaking as a neuroscientist who is doing complex neural reconstructions, I think he's off his timeline by at least two decades. Note that we (scientists) have yet to really reconstruct an actual neural system outside of an invertebrate and are finding that the model diagrams grossly under-predict the actual complexity present.

And as a cognitive neuroscientist, I say he's off the mark entirely. As per Minsky, a fish swims under water; would you say a submarine swims?

What exactly is the "level of humans"? Passing the Turing test? (Fatally flawed because it's not double blind, btw.) Part of human intelligence includes affective input; are we to expect intelligence to be like human intelligence because it includes artificial emotions, or are we supposed to accept a new definition of intelligence without affective input? Surely they're not going to wave the "consciousness" flag. Well, Kurzweil might. Venter might follow that flag because he doesn't know better and he's as big a media hog as Kurzweil.

I think it's a silly pursuit. Why hobble a perfectly good computer by making it pretend to be something that runs on an entirely different basis? We should concentrate on making computers be the best computers and leave being human to the billions of us who do it without massive hardware.

We should concentrate on making computers be the best computers and leave being human to the billions of us who do it without massive hardware.

The thing is, Kurzweil is trying to achieve immortality, which is pretty much predicated on the ability to simulate his brain. I don't know if that's coloring his predictions or not, and it really doesn't say anything about whether there can be a machine that can do a full scan of an entire human brain. I don't know if he'll live that long. He'll be over 80 years o

you're misquoting Edsger Dijkstra. He said: "The question of whether a computer can think is no more interesting than the question of whether a submarine can swim". I'm not sure Minsky would agree.

The way I interpret Dijkstra here, is that he meant when a submarine starts to look sufficiently like a fish, we will call its actions 'swimming'. When it has the exact same range and 'functional ability' as a fish, but moves by spinning its rotors, we don't call it swimming. Thus the human criterion for intelli

Not only that, the GP (as many AI enthusiasts do) forgets that the synaptic connections are electro-chemical, not just purely electrical, and thus a whole new dimension of chemical communication enters the fray, complete with different functions of different neurotransmitters at different synapses of the same cell, which can alter the function of said cell both short-term and long-term.

The more fair comparison to a neuron is not that of a transistor or even a logic gate but to a whole complete embe

Please do not take this personally, but I don't think neuroscience is particularly important to AI. Yes, biology is horribly complex. But airplanes surpassed birds long ago, even though airplanes are much simpler and not particularly bio-inspired. Granted, birds still surpass airplanes in a few important ways (they forage for energy, procreate, and are self-healing... far beyond what we can fabricate in those respects) but airplanes sure are useful anyways. I don't think human-identical AI would have mu

There are still many things we can learn from biology that can be translated to machines. The translations don't have to be 1:1 for us to make use of them. The way birds as well as insects make use of different wing-surface shapes during wing beats has translated into changes in some aircraft designs. They weren't incorporated directly in the same way, but they taught us important lessons that we could then implement in different ways but with a similar outcome.

Your question displays a lack of understanding. Not of biology, but of physics: the square-cube law, specifically. Aircraft don't corner as fast as small birds. The reason isn't any magic of biology; it's simple momentum.

The larger any object is, the more it weighs. Make it twice as big and it weighs eight times as much, and packs eight times as much momentum. A large bird doesn't turn as fast as a small bird. The same is true of planes. The same is true of ships. A bus won't corner as fast as a sports car either.

A typical aircraft is a hundred times bigger than a swallow. It's a million times heavier. It packs a million times the momentum. It's not that the swallow's design is better, or that there is some biological magic. It's just a question of size. It's true the other way too. A mosquito can turn a lot quicker than a barn swallow. Barn swallows catch mosquitoes because they can fly faster. Guess what: the aircraft you were so dismissive of can fly a lot faster than that barn swallow too. Visit a large airport. Swallows get killed by aircraft every day. They can't get out of the way in time. A barn swallow as large as a chicken would be ripped apart by the stresses if it were able to corner as fast as a real barn swallow. That's the real reason chickens don't turn well in flight. (Yes, chickens can fly for short distances.) Momentum.
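The square-cube scaling argument can be checked with trivial arithmetic. A minimal sketch (the numbers are illustrative units, not measured values):

```python
# Square-cube law: scaling an object up by a linear factor s multiplies
# its volume (and thus its mass, at constant density) by s**3.
# At a given speed, momentum p = m*v scales with mass.
def scaled_mass(mass, linear_scale):
    """Mass of an object scaled by `linear_scale` in every dimension."""
    return mass * linear_scale ** 3

bird_mass = 1.0                          # arbitrary units
big_bird = scaled_mass(bird_mass, 2)     # twice as big in every dimension
print(big_bird)                          # 8.0: eight times the mass and momentum
```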

Your problem appears to be that you just don't understand scale. It is a wonderful thing when you do. You see reasons all around us, for all kinds of things.

So, yes, we should study biology. But we should also remember the physics. The tricks the mosquito uses just won't work for a passenger jet. Nor will the barn swallow's turns be good for the passengers on that jumbo jet. Still, some things will be useful. We just don't know what. Who would have thought that studying a shark's skin would help racing yachts? Personally, I hope that we get a lot of surprises. That's where the fun in science is.

I don't expect AI research to give us human type intelligence in a machine. Ever. That doesn't mean we shouldn't try. We don't know what we will get, or what it will make possible. We can't know before the fact. Studying birds didn't give us aircraft that can corner in a second or two, it did give us jumbo jets that can take us half way around the world in an easy chair. That took a lot of other things too.

The Wright brothers succeeded where Lilienthal failed. Not because they understood birds better, but because in the meantime the internal combustion engine had been developed. AI will be the same. Right now, we don't even know what we need in order to make this work. There will be surprises.

As a party "outside" the field but interested, I agree with all of you here so far, except that of course you disagree on timelines. :o)

"Artificial Intelligence" in the last few decades has been a model of failure. The greatest hope during that time, neural nets, have gone virtually nowhere. Yes, they are good at learning, but they have only been good at learning exactly what they are taught, and not at all at putting it all together. Until something like that can be achieved (a "meta-awareness" of the data), they will remain little more than automated libraries. And of course at this time we have no idea how to achieve that.

"Genetic algorithms" have enormous potential for solving problems. Just for example, recently a genetic algorithm improved on something that humans had not improved in over 40 years... the Quicksort algorithm. We now have an improved Quicksort that is only marginally larger in code size, but runs consistently faster on datasets that are appropriate for Quicksort in the first place.

But genetic algorithms are not intelligent, either. In fact, they are something of the opposite: they must be carefully designed for very specific purposes, require constant supervision, and achieve their results through the application of "brute force" (i.e., pure trial and error).
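Neither the improved-Quicksort code nor the company's algorithm is shown here, but the "brute force trial and error" loop that characterizes a genetic algorithm is simple to sketch. This toy version evolves a bit string toward a fixed target, which is enough to show the carefully-designed-for-one-purpose nature described above (all names and parameters are illustrative):

```python
import random

TARGET = [1] * 20  # the one specific "problem" this GA is hard-wired to solve

def fitness(genome):
    """Number of positions matching the target."""
    return sum(1 for g, t in zip(genome, TARGET) if g == t)

def mutate(genome, rate=0.05):
    """Flip each bit with a small probability: pure trial and error."""
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=50, generations=200):
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == len(TARGET):
            break
        survivors = population[: pop_size // 2]   # weed out the bad experiments
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)

best = evolve()
```

Nothing here is "intelligent": the loop has no idea what it is optimizing, and changing the problem means redesigning the fitness function by hand.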

I will start believing that something like this will happen in the near future, only when I see something that actually impresses me in terms of some kind of autonomous intelligence... even a little bit. So far, no go. Even those devices that were touted as being "as intelligent as a cockroach" are not. If one actually were, I might be marginally impressed.

I assure you that I did not make this up, but I could have been the victim of a hoax.

About a year ago, I found a link (from a reputable source, IIRC) to a site from a company that claimed to be doing significant work with genetic algorithms. As an example, they had a description (and even a graphic demo) of their modified quicksort vs. a regular quicksort. According to their lit., it showed marginal improvements over quicksort by ensuring (in some non-obvious way) that each element in the dataset was only

The Blue Brain project [bluebrain.epfl.ch] is already simulating a cluster of 10,000 neurons known as a neocortical column. Although quite good already (in terms of biological realism), their simulation model is still incomplete, with a few more years of work needed to get the neurons working like they do in real life. With more computational power to increase the neuron count, and better models, they will one day be able to simulate an entire mammalian brain.

While this project is verrry cool, they are not even remotely close to biological realism. Sorry...

their simulation model is still incomplete with a few more years work to get the neurons working like in real life.

That is just it. We are finding that real biological systems from complete neural reconstructions are far more complex with many more participating "classes" of neurons with much more in the way of nested and recurrent collateral co

I also think that we're unlikely to equal human intelligence except as a curiosity, long after we've obtained the necessary technology. Instead, we'll produce AIs with wildly different abilities from humans (far better at some things, such as arithmetic or remembering large slabs of data, and probably worse at others). Calibrating an AI to be "equal" to a human will be a completely separate and not especially useful endeavor, and it will be something tinkerers do later. And I suspect that the necessary insig

What does a human do to read a bluff? He observes his opponent, takes inputs such as bet size and heart rate, applies them to known patterns of bluffers, and looks for a match. Sure, a human does this without realizing it, but little of how this happens is a mystery. Also, how do humans bluff? They just bet as a negative EV play*, and bluffing properly is a matter of knowing the probability that the opponent will call. I am researching applying AI to poker (look out in June for a lot of high-quality research from the AAAI Computer Poker Championship), and this sort of argument, "Computers can't bluff, they just run numbers", both understates what has been achieved in AI in this field and overstates what humans do. Yes, computer programs aren't quite up to the standard of world-class players (Limit poker has achieved this, but not No-Limit), but this game has only a couple of years to go before this milestone is reached. I predict that by the end of the year we will have high-quality bots that can beat 99% of players, and by the end of 2010 No-Limit Texas will be a computer-dominated game.

The only thing that humans do that AI doesn't (well) is automatically follow a few paths rather than look at the whole picture. As an example, it has been shown (sorry, no reference right now) that some chess grandmasters look only at a couple of moves and then calculate all the possible combinations from there, rather than examine every possible move. This drastically speeds up the calculation; however, it does miss moves that could be considered the "best". So while this act of "feeling" which is the best move is a good approximation done by humans, it isn't an optimal or maximal play.
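The speed-up from considering only a couple of candidate moves per ply is easy to quantify with toy numbers (the branching factor of 30 is a rough chess-like figure, assumed purely for illustration):

```python
# Node counts for a game tree with a given branching factor and depth.
def nodes(branching, depth):
    """Total positions in a uniform tree, root included."""
    return sum(branching ** level for level in range(depth + 1))

full = nodes(branching=30, depth=6)    # examine every legal move
pruned = nodes(branching=2, depth=6)   # only two candidate moves per ply
print(full // pruned)                  # millions of times fewer positions
```

The pruned search is vastly cheaper, but any move outside the two candidates is invisible to it, which is exactly the "misses the best move" trade-off described above.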

As for the article, I don't agree with all of what he says (the idea of nanobots doing what Kurzweil says scares me and I doubt it will be legal to do this), but I do agree with the 2029 prediction, that is if proper resources are given to that particular problem. Replicating humans is a goal in AI for some researchers, but not all of them. Personally, I couldn't care less if there exists a robot that perfectly resembles a human, as long as there are intelligent computers systems that can do the problems that humans find hard (such as finding patterns in very large sets of data or solving complex mathematical equations).

*Technically, it isn't a negative EV play if there is a high probability of the opponent folding. In that case, making the highest EV play naturally involves bluffing, if it can be assumed that the opponent will fold to a bet.
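The footnote's point can be made concrete. In a simplified one-bet model (the opponent either folds, or calls and wins), the bluff's EV turns positive once the fold probability is high enough:

```python
def bluff_ev(pot, bet, p_fold):
    """EV of betting `bet` into `pot` with a hand that always loses if called.

    Win the pot when the opponent folds; lose the bet when called.
    """
    return p_fold * pot - (1 - p_fold) * bet

# Betting the size of the pot: break-even when the opponent folds half the time.
print(bluff_ev(pot=100, bet=100, p_fold=0.5))   # 0.0
print(bluff_ev(pot=100, bet=100, p_fold=0.6))   # positive EV: a "proper" bluff
```

This is only a sketch; real poker EV also depends on the chance of being raised, of improving when called, and so on.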

If artificial intelligence ever gets to the point where it is greater than humans, won't it be capable of producing even better AI, which would in turn create even better AI, and so on? If AI does reach the level of human intelligence, and eventually surpasses it, can we expect an explosion in technology and other sciences as a result?

If artificial intelligence ever gets to the point where it is greater than humans, won't it be capable of producing even better AI, which would in turn create even better AI, and so on? If AI does reach the level of human intelligence, and eventually surpasses it, can we expect an explosion in technology and other sciences as a result?

We already know it doesn't take any intelligence to speak of. All it takes is lots of trials, continually weeding out the bad experiments and trying new variations of the successful ones. As computers get faster and have more memory, it will take less time to try more variations, and more complex variations can be tested.

This positive feedback effect happens to a considerable extent even without machines that have superintelligence, or even what we'd usually consider intelligence at all. It's happening right now. And has been happening for as long as humans have been making tools. Every generation of technology allows us to build better tools, which in turn helps us develop more sophisticated technology. A great example from fairly recent history, and that is still ongoing, is the development of CAD/CAM/CAE tools, particularly those used for design of electronic hardware (schematic capture, PCB, HDL's, programmable logic compilers, etc.), and the parallel development of software development tools. Once computers became good enough to make usable development tools, those tools helped greatly with the creation of more sophisticated computer technology, which supported better development tools.

Superintelligence may speed this up, but the effect is quite dramatic already.

The farther out you make a projection, the less likely it is to be true. With this one in particular, I just don't see it being a focus of research. Yes, we will have increased levels of intelligence in cars, toasters, and ballpoint pens, but the intelligence will be in a supporting role to make the devices more useful to us. There isn't a need for a human-like intelligence inside a computer. We have enough of those inside human bodies.

Also, I will not be ingesting nanobots to interact with my neurons; I'll be injecting them into my enemies to disrupt their thinking. Or possibly just threatening to do so, to extract large sums of money from various governmental organisations.

And even if there were (and I think this is key to the fallacy in this prediction), we wouldn't have the theories backing the hardware. We will most likely get some super-fast hardware within these years, but what's much less certain is whether AI theories will have advanced enough by then, and whether the architecture will be naturally parallelized enough to take advantage of them. Because while we don't know much about how the human brain reasons, we do know that to do it at a temperature as low as 37 degrees Celsius, in an area as small as our cranium (it's pretty damn amazing when you consider this!), it needs to be massively parallelized. And, again, we don't really even have the theories yet. We don't know how the software should best be written.

That's why, even in this day and age of 2008, we're essentially running chatbots based on Eliza from 1966. Sure, there have been refinements and the new ones are slightly better, but not by much in the grand scheme of things. A sign of this problem is that they give their answers to your questions in a fraction of a second. That's not because they're amazingly well programmed; it's because the algorithms are still way too simple and based on theories from the sixties.
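The "refined Eliza" criticism is easy to illustrate: the core of a 1966-style chatbot is just pattern substitution, which is why replies come back in a fraction of a second. A minimal sketch (the patterns are invented for illustration, not taken from any actual Eliza implementation):

```python
import re

# A tiny Eliza-style chatbot: match a pattern, reflect it back as a question.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I),   "How long have you been {0}?"),
    (re.compile(r".*"),                "Please tell me more."),
]

def reply(text):
    for pattern, template in RULES:
        match = pattern.match(text)
        if match:
            return template.format(*match.groups())

print(reply("I need a vacation"))   # Why do you need a vacation?
```

There is no model of meaning anywhere in this loop, only string manipulation, so no amount of raw hardware speed makes it smarter.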

If the AI researchers are claiming "Oh, but we aren't there yet because we haven't got hardware nearly good enough", why aren't we even halfway there, with at least far more clever software than chatbots, working on a reply to a single question for an hour? Sure, that would be impractical, but we don't even have software for this that pushes the boundaries of our current CPUs.

So at this point, if we made a leap to 2029 right now, all we'd get would be super-fast Elizas (I'm restricting my AI talk to "general AI" now, not heuristic antispam algorithms, where the algorithms are very well understood and don't form a hurdle). The million-dollar question here is: will we, before 2029, have made breakthroughs in understanding how the human brain reasons, along with constructing the machines (biological or not as necessary) to approximate its structure and form the foundation on which the software can be built?

I mean, we can talk traditional transistor-based hardware all day and how fast it will be, but it will be near meaningless if we don't have the theories in place.

Cars and toasters are NOT "intelligent"!! Not even to a small degree. Just plain... not.

Yes, they do more things that we have pre-programmed into them. But that is a far cry from "intelligence". In reality, they are no more intelligent than an old player piano, which could do hundreds of thousands of different actions (multiple combinations of the 88 keys, plus 3 pedals), based on simple holes in paper. Well, we have managed to stuff more of those "holes" (instructions) into microchips, and so on, but b

How are we so sure that advances in computers will continue at such a rapid pace? Computer miniaturization is hitting against fundamental quantum-mechanical limits, and it's crazy to expect 2008-2028 to have progress quite as rapid as 1988-2008.

Short of major breakthroughs on the software end, I don't expect AI to be able to pass a generalized Turing Test anytime soon, and I'm pretty certain the hardware end isn't going to advance enough to brute-force our way through.

Artificial intelligence would be a nice tool to reach toward, or to use to understand ourselves... but rare is the circumstance that demands, or is worth the risks involved with, making a truly intelligent agent.

The real implication to me, is that it will be possible to have machines capable of running the same 'software' that runs in our own minds. To be able to 'back up' people's states and memories, and all the implications behind that.

Artificial intelligence is a nice goal to reach for, but it is nothing compared to the siren's call of memories being able to survive the traditional end of existence: cellular death.

The real implication to me, is that it will be possible to have machines capable of running the same 'software' that runs in our own minds. To be able to 'back up' people's states and memories, and all the implications behind that.

That presumes you can understand how human thought is made. It presumes real human intelligence can be modeled and implemented by a digital process, which may not be possible. I doubt that even quantum digital computers could do it. It might be possible in the future to simulate our neural machinery without really knowing how it works, a high-fidelity digital form of a completely analog process, but then you couldn't know what to expect as the result. The way the program was coded and the inputs given it w

The copy of "you" that runs on a computer for thousands of years won't really be you. You'll still be dead.

Just as dead as the meat copy of you from 5 minutes ago? What magic makes your body 5 minutes in the future "you" instead of just a random copy? You do know that all your atoms get replaced every few years, and that when you sleep deeply or get put under general anesthesia almost all of your brain activity ceases, right? I have no problem with going to sleep at night and waking up in a slightly diff

For over 40 years, the field of AI has been *littered* with predictions of the type: "We will be able to mimic Human levels of xxx" (substitute for XXX any of the following: contextual understanding, reasoning, speech, vision, non-clumsy motoric ability).

So far _not one_ of those claims has come true, with the possible exception of the much-vaunted "robotic snake".

Has it occurred to you that all of us already work, to some extent, at the direction of computers? Think of the tens of thousands of pilots and flight attendants... what city they sleep in, and who they work with, is dictated by a computer which makes computations which cannot fit inside the human mind. An airline could not long survive without automated scheduling.

Next consider the stock market. Many trades are now automated, meaning, computers are deciding which companies have how much money. That ultimately influences where you live and work, and the management culture of the company you work for.

We are already living well above the standard that could be maintained without computers to make decisions for us. Of course, as humans we will always take the credit and say the machines are "just" doing what we told them, but the fact is we could not carry out these computations manually in time for them to be useful.

Has it occurred to you that all of us already work, to some extent, at the direction of computers? Think of the tens of thousands of pilots and flight attendants... what city they sleep in, and who they work with, is dictated by a computer which makes computations which cannot fit inside the human mind. An airline could not long survive without automated scheduling.
Next consider the stock market. Many trades are now automated, meaning, computers are deciding which companies have how much money.

Good news: This could herald a lot of good stuff, increased unemployment, greater reliance on computers, newer divides in the class strata of society, further confusion on what authority is and who controls it, as well as greater largess in the well meaning 'we are here to help' phrase department.

Bad news: After reviewing the latest in the US political scene, getting machines smarter than humans isn't going to take so much as we thought. My toaster almost qualifies now. 'You have to be smarter than the door' insults are no longer funny. Geeks will no longer be lonely. Women will have an entire new group of things to compete with. If you think math is hard now, wait till your microwave tells you that you paid too much for groceries or that you really aren't saving money in a 2 for 1 sale of things you don't need. Married men will now be third smartest things in their own homes, but will never need a doctor (bad news for doctors) since when a man opens his mouth at home to say anything there will now be a wife AND a toaster to tell him what is wrong with him.

He obviously hasn't been paying attention to AI developments. The story of AI is largely a story of failure. There have been many dead ends and unfulfilled predictions. This will be another inaccurate prediction.

Computers can't even defeat humans at go, and go is a closed system. We are not twenty years away from a human level of machine intelligence. We may not even be *200 years* away from a human level of machine intelligence. The technology just isn't here yet. It's not even on the horizon. It's nonexistent.

We may break through the barrier someday, and I certainly believe the research is worthwhile, for what we have learned. Right now, however, computers are good in some areas and humans are good in others. We should spend more research dollars trying to find ways for humans and computers to efficiently work together.

"Everybody promises that AI will hit super-human intelligence at 20XX and it hasn't happened yet! It never will!"... well guess what? It'll be the last invention anybody ever has to make. Great organizations like the Singularity Institute http://en.wikipedia.org/wiki/Singularity_Institute [wikipedia.org] really shouldn't be scraping along on such poor

We aren't that far off. Estimates for the computational power of the human brain are around 10**16 operations per second. Supercomputers today do roughly 10**14, and Moore's Law increases the exponent by 1 every 5 years. Even if we have to simulate the brain's neurons by brute force and the simulation has 99% overhead, we'll be there in 20 years. (Assuming Moore's Law doesn't hit physical limits).
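The parent's arithmetic checks out under its own assumptions (10**16 ops/s for the brain, 10**14 for today's supercomputers, one order of magnitude every 5 years, and 99% simulation overhead; all of these figures are the parent's estimates, not established facts):

```python
import math

brain_ops = 1e16        # assumed ops/sec of the human brain (parent's figure)
overhead = 100          # 99% simulation overhead means 100x the raw power
current = 1e14          # rough supercomputer throughput today (parent's figure)
years_per_order = 5     # one order of magnitude every 5 years (parent's figure)

orders_needed = math.log10(brain_ops * overhead / current)
print(orders_needed * years_per_order)   # 20.0 years
```

Of course, the conclusion is only as good as the assumptions, which the replies below dispute.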

You can similarly compare the temperature of the human brain and then observe that machines have long surpassed it. Does that make machines smarter? I don't think so. The brain is so insanely parallel, and the neurons are not just digital gates; they are more like computers in themselves. The machines of today are a far cry from the brain in how they are built. But sure, you can compare them by some meaningless parameter to say that we're close. How about clock frequency: neurons are 1kHz devices, and modern CPU

I have little doubt we already have the components necessary to simulate a human-like brain, one way or another, right now. But that's not enough. You need to know how to put it together, how to set it up to be educated somewhat like a human, how to get it some semblance of human-like sensory input (at least for the vision/hearing centers, if you're interested in either of those things), and then you need to train it for years and years. So 21-years-off is too optimistic, I think, by at least an order of magn

(Most) people can go out and get more education to advance from a menial job to a more skilled one when theirs is taken over by a robot, but wtf do we do if the machines are as smart as we are? Who is going to hire people to do even the most advanced thinking jobs when a machine that works for electricity 24/7 can do it? This kind of thing will bring on the Luddite revolution in a hurry.

I think these nonsense predictions are best described as retarded. You can't predict something that is beyond our current technological capability, since it depends on breakthroughs being made that are impossible to predict. These breakthroughs could come tomorrow, or they could never come at all. I don't know why I'm posting this. Even talking about this fantastic nonsense is a waste of time.

" Artificial Intelligence will reach the level of humans"Buddy,I've been around more than four decades.I've yet to see more than a superficial level of intelligence in humans.Send your coders back to the drawing board with a loftier goal.

It is not too much of an overstatement to say that the field of AI has not significantly progressed since the 1980s. The advancements have been largely superficial, with better and more efficient algorithms being created but without any major insights, much less a road map for the future. While methods that originated in AI research are more common in real-world applications, the research and development of new concepts has ground to a halt - not that it was ever a question of smooth continuous progress.

It might seem like the lack of AI development is a temporary problem and altogether a peripheral issue. It is however neither - it is a fundamental problem and it affects all software development.

Early in the history of computing, software and hardware development progressed at a similar pace. Today there is a giant and growing gap between the rate of hardware improvements and software improvements. As most people involved in the study of software engineering are aware, software development is in a deep crisis.

The problem can be summarized in one word: complexity. The approach to building software has largely been based on traditional engineering principles and approaches. Traditional engineering projects never reached the level of complexity that software projects have. As it turns out, humans are not very good at handling and predicting complex systems.

A good example of the problems facing software developers is Microsoft's new operating system, Windows Vista. It took half a decade to build and cost nearly 10 billion dollars. At two orders of magnitude higher cost than the previous incarnation, it featured relatively minor improvements; almost every radical new feature (such as a new file system) that was originally planned was abandoned. The reason for this is that the complexity of the code base had become unmanageable. Adequate testing and quality assurance proved impossible, and the development cycle became painfully slow. Not even Microsoft, with its virtually unlimited resources, could handle it.

At this point, it is important to note that this remains an unsolved problem. It would not have been solved by a better-structured development process or directly by better computer hardware. The number of free variables in such a system is simply too great to be handled manually. A structured process and standardized information transfer protocols won't do much good either. Complexity is not just a quantitative problem; at a certain level you get emergent phenomena in the system.

Sadly, artificial intelligence research, which is supposed to be the vanguard of software development, faces the same problems. Although complexity is not (yet) the primary problem there, manual design has proved very inefficient. While clever ideas occasionally move the field forward, there is nothing to match the relentless progress of computer hardware. There exists no systematic recipe for progress.

Software engineering is intelligent design, and AI is no exception. The fundamental idea persists that it takes a clever mind to produce a good design. The view that it takes a very intelligent thing to design a less intelligent thing is deeply entrenched on every level. This clearly pre-Darwinian view of design isn't based on some form of dogma, but on a pragmatism and common sense that aren't challenged where they should be. Intelligent design was a workable approach while software was trivial enough to be manageable, but it should have become blindingly obvious that it was untenable in the long run. There are approaches that work at the meta level - neural networks, genetic algorithms and so on - but they are thoroughly insufficient. All these algorithms are still the results of intelligent design.

So what Darwinian lessons should we have learned?

We have learned that a simple, dumb optimization algorithm can produce very clever designs. The important insight is that intelligence can be traded for time.
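To make "intelligence traded for time" concrete, here is a minimal sketch of my own (the target string and parameters are illustrative) in the style of Dawkins' well-known "weasel" program: a loop that knows nothing except "mutate and keep the best" still finds a target, given enough generations.

```python
import random

# A dumb mutate-and-select loop: no foresight, no planning, just time.
TARGET = "METHINKS IT IS LIKE A WEASEL"
CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(candidate):
    # Count positions that already match the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    # Randomly rewrite characters; each change is blind.
    return "".join(random.choice(CHARS) if random.random() < rate else c
                   for c in candidate)

random.seed(0)
parent = "".join(random.choice(CHARS) for _ in TARGET)
generations = 0
while parent != TARGET:
    # Keep the best of a litter of mutants -- that's the whole algorithm.
    children = [mutate(parent) for _ in range(100)]
    parent = max(children + [parent], key=fitness)
    generations += 1

print(parent)  # reaches the target, typically within a few hundred generations
```

The design is maximally stupid, yet it converges; the cleverness has been paid for in iterations rather than insight.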

The comedian Emo Philips once remarked that "I used to think my brain was the most important organ in my body until I realized what was telling me this."

We have a tendency to use human intelligence as a benchmark and as the ultimate example of intelligence. There is a mystery surrounding consciousness, and many people, including prominent thinkers such as Roger Penrose, ardently try to keep it that way.

Given what biological research has actually taught us about the brain and its evolution, however, there is essentially no justification for attributing mystical properties to our data-processing wetware. As brain-scanning capabilities steadily improve, we have been developing functional models for many parts of the brain. For the parts that still need more investigation, we do have a picture, even if a rough one.

The sacred notion of consciousness has not gone untouched by this research. Although we are far from a final understanding, we have a fairly good idea, backed by solid empirical evidence, that consciousness is a post-processing effect rather than the first cause of decisions. The degree of desperation can be seen in attempts to explain away the delay between conscious response and the activation of other parts of the brain. Penrose, for instance, suggests that yes, there is an average 500 ms delay, but that it is compensated for by quantum effects that are time-symmetric - that the brain actually sees into the future, which is then delayed to create a real-time decision process. While this is rejected as absurd by a majority of neuroscientists and physicists, it is a good example of how passionately some people feel about the role of the brain. It is, however, painfully clear that just as we were forced to abandon an Earth-centered universe, we need to abandon the myth of the special place of human consciousness.
The important point here is that once we rid ourselves of this self-imposed veil of mystery around human intelligence, we can take a sober view of what artificial intelligence could be. The brain developed through an evolutionary optimization process, and while it reaped many benefits from that process, it also took the full blow of the process's limitations and of the context in which it ran.

Evolution through natural selection is far from the best optimization method imaginable. One major problem with it is that it is a so-called "greedy" algorithm - it has no look-ahead or planning capability. Every improvement, every payoff, must be immediate. This creates systems that carry a lot of historical baggage - an improvement is made not as a stand-alone feature but as a continuation of the previous state. It is no coincidence that a brain cell is a cell like any other, nucleus and all. Nor is it a cell because a cell is the optimal structure for information processing; it was simply what could be done by modifying the existing wetware. It is not hard to imagine how that structure could be improved upon were it not limited to the biological building blocks available to the genetic machinery.
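The "greedy, no look-ahead" failure mode is easy to demonstrate with a toy of my own devising (the landscape function is invented for illustration): a climber that only accepts immediate improvements stalls on the nearest small peak even when a much taller one exists.

```python
# Two peaks: a small one near x=2, a tall one near x=8.
def landscape(x):
    return max(0, 3 - abs(x - 2)) + max(0, 6 - abs(x - 8))

def greedy_climb(x, step=1):
    # Accept a neighbor only if it is an immediate improvement;
    # no planning, no willingness to go downhill first.
    while True:
        best = max((x - step, x, x + step), key=landscape)
        if best == x:
            return x
        x = best

print(greedy_climb(0))  # starting at 0, it stalls on the small peak at x=2
print(greedy_climb(6))  # only a start already near the tall peak reaches x=8
```

Every intermediate position had to pay off immediately, so where the climber ends up depends entirely on where it started - which is the sense in which evolved designs carry historical baggage.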

Another point worth making is that our brains are not optimized for the modern kinds of information processing humans now engage in - writing software, for instance. Humans have changed little in the last 50,000 years in terms of intellectual capacity, but our societies have changed greatly. Our technological progress is a side effect of capabilities that evolved because they increased survivability when we roamed the plains of Africa in small hunter-gatherer family groups. To assume that the resulting information-processing system (the brain) would be the optimal solution for anything else is not justifiable.

Since the 1950s there has been ongoing research into creating biologically inspired computer algorithms and methods. Some of it has been very successful, with simplified models that actually did do something useful (artificial neural networks, for instance). Progress has, however, been agonizingly slow.
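The "simplified model" in question really is drastic: a single perceptron, the 1950s-era abstraction of a neuron, is just a weighted sum and a threshold. A minimal sketch (my own, using the classic perceptron update rule) learning logical AND:

```python
# A single perceptron: weights, a bias, a hard threshold, and the
# classic error-correction update rule.
def train_perceptron(samples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0]*x1 + w[1]*x2 + b > 0 else 0
            err = target - out
            # Nudge weights toward reducing the error.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
predict = lambda x1, x2: 1 if w[0]*x1 + w[1]*x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in AND])  # -> [0, 0, 0, 1]
```

Useful, yes - but the distance between this and a biological neuron, let alone a brain, gives a sense of how simplified these "biologically inspired" models really are.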

Penrose for instance suggests that yes, there is an average 500 ms delay, but that is compensated by quantum effects that are time symmetric - that the brain actually sees into the future, which then is delayed to create a real-time decision process. While this is rejected as absurd by a majority of neuroscientists and physicists, it is a good example of how passionately some people feel about the role of the brain.

On the other hand, Dean Radin (while barking mad in some ways) has done an experiment that su

The Singularity is Near has a rebuttal of your first paragraph. Any successful part of AI research spins off into its own well-functioning discipline... optical character recognition, dictation software, text-to-speech, etc. - they were sci-fi "AI" in 1980 and now they are working technologies. AI research is the umbrella under which only the unsolved problems still lie, and thus it always looks undone.

Can anyone name an important algorithm or representation from this decade?

There's been substantial progress in trainable computer vision systems in the last decade.
Computer vision is finally starting to work on real-world scenes. SLAM algorithms work now. Texture matchers work. There really has been progress in those areas.
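For readers unfamiliar with what a "texture matcher" does at its core: most such systems score candidate positions by normalized cross-correlation between a template and an image window. A sketch of mine, reduced to 1-D to keep it readable (real systems such as OpenCV's matchTemplate do the same over 2-D pixel arrays):

```python
import math

def ncc(a, b):
    # Normalized cross-correlation of two equal-length signals:
    # +1.0 means a perfect (brightness-invariant) match.
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

def best_match(image, template):
    # Slide the template over the image; return the best-scoring offset.
    scores = [ncc(image[i:i + len(template)], template)
              for i in range(len(image) - len(template) + 1)]
    return max(range(len(scores)), key=scores.__getitem__)

image = [0, 0, 1, 5, 9, 5, 1, 0, 0, 0]
template = [1, 5, 9]
print(best_match(image, template))  # -> 2, where the template occurs
```

The recent progress the parent mentions is less about this scoring step and more about making it robust to scale, rotation, and lighting in real-world scenes.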

Predictions like this have been made in the past, and haven't even come close. This one is no different. The bottom line is that humans process some information in a non-representational way, while computers must operate representationally. So even if the computational theory of mind is true, a microchip can't mimic it. Hubert Dreyfus has written a great deal on this topic, and provides extremely compelling arguments as to why we'll never have human-type AI. Of course, AI can do a lot of "smart" things and be extremely useful.

It's one thing to predict when a building project will be finished or when we'll reach a certain level of raw processing power because these things proceed by predictable means. But strong AI requires us to make theoretical advances. Theoretical advances don't proceed like a building project--someone has to have a clever idea, fully develop and understand it himself and convince others of it. And it won't occur to someone all at once, so we'll need incremental advances, all of which will happen unpredictably.

At least not yet. I can't believe that the sort of bullshit that Ray Kurzweil keeps peddling gets taken so seriously.

There is a lot of talk about computers surpassing, or not surpassing, humans at various tasks - does it not bother anyone that computers don't actually possess any intelligence? By any definition of intelligence you'd like? Every problem that a computer can "solve" is in reality solved by a human using that computer as a tool. I feel like I'm losing my mind reading these discussions. Did I miss something? Has someone actually produced a sentient machine? You'd think I would have seen that in the papers!

What's the point of projecting that A will surpass B in X if the current level of X possessed by A is zero? There seems to be an underlying assumption that merely increasing the complexity of a computational device will somehow automatically produce intelligence. "If only we could wire together a billion Deep Blues," the argument seems to go, "it would surpass human intelligence." By that logic, if computers are more complex than cars, does wiring together a billion cars produce a computer?

Repeat after me - The current state of the art in artificial intelligence research is: fuck all. We have not produced any artificial intelligence. We have not begun to approach the problems which would allow us to start on the road to producing artificial intelligence.

Before you can create something that surpasses human levels of intelligence, one would think you'd need to be able to precisely define and quantify human intelligence. Unless I missed something else fairly major, that has not been done by anyone yet.

We will have both the hardware and the software to achieve human level artificial intelligence

What he means is that, with the steadily declining level of human intelligence over the past five decades, as depicted at http://www.fourmilab.ch/documents/IQ/1950-2050/ [fourmilab.ch], by the year 2029 human intelligence will meet machine AI - which will remain as constant as ever, and continue to ask "Do you want to quit? Yes/No" every time I quit Word.

Maybe that's why Google is hoarding all the remaining three digit IQ scores so that there is no shortage of IQ.

In other news, lots of flying chairs were heard swishing around Redmond Campus at Microsoft when the CEO heard google was cornering the market on Human IQs.

And I work on AI and machine learning day in and day out. I'd put the goal post at 50 years, and that's an optimistic estimate. There are scant few research centers that do "general AI" research. Even fewer actually talk to neuroscientists, thus dismissing one viable (though extremely complex and costly) avenue of research. The fact remains, however, that at this point we don't have the required sophistication in any of the areas that presumably would be required to build a "thinking" machine. We can't process human language well enough (and therefore speech recognition and textual information sources are pretty much useless), we can't process visual information well enough either (segmentation, recognition, prediction, handling a continuous visual stream), we don't know the cognitive mechanisms below high level abstract reasoning, and even at a high level our abilities are weak (try to build a classifier that will recognize sarcasm, for example), finally even if we could do all that, we wouldn't be able to store the resulting data efficiently enough (in terms of required space and retrieval speed), because we have no idea how to do it.
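The sarcasm example deserves unpacking: most text classifiers work from surface features such as word counts, and sarcasm is precisely the case where surface features point the wrong way. A toy sketch of mine (the word lists are invented for illustration, not a real sentiment lexicon):

```python
# A bag-of-words sentiment scorer: counts positive vs negative words.
POSITIVE = {"great", "love", "wonderful", "perfect"}
NEGATIVE = {"awful", "hate", "terrible", "broken"}

def sentiment(text):
    words = text.lower().replace(",", "").replace(".", "").split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("I hate this broken phone"))              # negative - fine
print(sentiment("Oh great, it crashed again. Perfect."))  # scored positive,
# though any human reads it as sarcasm: the signal isn't in the words at all.
```

Recognizing the second sentence as negative requires a model of the speaker's situation and intent - exactly the high-level cognitive machinery the parent says we can't yet build.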

That said, a lot of stuff can happen in 50 years, and I bet that once some of the major problems get solved, there will be an insane stream of money pouring into this field to accelerate the research. Just imagine the benefits an "omniscient" AI trader would bring to a bank. The question is, do we want this to happen? This will be far more disruptive a technology than anything you've ever seen.

My gut feeling is that, from strictly a hardware perspective, we're already capable of building a human-level AI. The problem is that, from a software perspective, we've focused too much on approaches that will never work.

As far as I'm concerned, the #1 problem is the Big Damn Database approach, which is basically a cargo cult [wikipedia.org] in disguise. Though expert systems are useful in their niches, "1. Expert system 2. ??? 3. AI!" is not a workable roadmap to the future. I'm certain that it's far easier to start with an ignorant AI and teach it a pile of facts than it is to start with a pile of facts and teach it to develop a personality.

The #2 problem is the Down To The Synapse approach. This, unlike BDD, could quite possibly create "A"I if given enough hardware. But I think that, while DTTS will lead to a better understanding of medicine, it won't advance the AI field. It won't lead to an improved understanding of how human cognition works — it certainly won't teach us anything we didn't already know from Phineas Gage [wikipedia.org] and company [ebonmusings.org].

Even if we go to all the trouble of developing a supercomputer capable of DTTS emulation of a human brain — so what? If we ask this emulated AI to compute 2+2, millions of simulated synapses will fire, trillions of transistors will flip states, phenomenal amounts of electricity will pour into the supercomputer, just for the AI to give the very same answer that a simple circuit consisting of a few dozen transistors could've answered in a tiny fraction of the time, using the amount of electricity stored on your fingertip when you rub your shoes on the carpet during winter. And that's not even a Strong AI question. That's not to say that working DTTS won't be profound in some sense, but we know we can build it better, yet we won't have the faintest idea of where to go next.

That brings me to my core idea — goals first, emotions [mit.edu] close behind. Anyone who's pondered the "is/ought" problem in philosophy already knows the truth of this, even if they don't know they know the truth of it. The people building cockroach robots were on the right track all along; they're just thinking too small. MIT's Kismet [mit.edu], for instance, gives an idea of where AI needs to head.

That said, I think building a full-on robot like Kismet is premature. A robot requires an enormous number of systems to process sensory data, and those processing systems are largely peripheral to the core idea of AI. If we had an AI already, we could put the AI in the robot, try a few things, and ask the AI what works best. So, ideally, I think we need to look at a pure software approach to AI before we go off building robot bodies for them to inhabit.

And how to do that? I think Electric Funstuff [gamespy.com]'s Sim-hilarities [electricfunstuff.com] captures the essence of that. If we give AIs a virtual world to live in — say, an MMO — then that removes a lot of the need for divining meaning from sensory input, allowing a sharper focus on the "intelligence" aspect of AI. Start with that, grow from there, and I can definitely see human-level AI by 2029.

Yes, I remember well my youth, reading Goedel Escher Bach and Winograd, etc., thinking that the next scientific revolution was coming. Things never got any better than Eliza. Now as a hard scientist, I strongly feel that the problem is far far off.

Every time I try out a new expert system, it gets more depressing -- it honestly feels like no progress is happening in that market at all. I have yet to have a conversation with a computer that has been any more compelling than my first round with WinEliza on Windows 3.1 in 1995.

There's still no semblance of a short-term memory, even so much as continuity between responses. It always quickly becomes obvious that each response has been prepared verbatim beforehand by a human, that the system is still performing only a keyword-canned response routine, perhaps feeding in a few variable strings.
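The "keyword-canned response routine" the parent describes is easy to show in full; ELIZA itself was little more than this. A minimal sketch (the rules here are invented examples, not Weizenbaum's original script):

```python
import re

# Keyword-triggered canned responses with a captured string substituted in.
# Note there is no state: nothing carries over between turns.
RULES = [
    (r"i am (.*)", "Why do you say you are {0}?"),
    (r"i feel (.*)", "How long have you felt {0}?"),
    (r"because (.*)", "Is that the real reason?"),
]

def respond(utterance):
    text = utterance.lower().strip(".!?")
    for pattern, template in RULES:
        m = re.match(pattern, text)
        if m:
            return template.format(*m.groups())
    return "Tell me more."  # fallback when no keyword matches

print(respond("I am tired of expert systems"))
# -> Why do you say you are tired of expert systems?
print(respond("The weather is nice"))  # -> Tell me more.
```

That a 20-line routine is still a fair caricature of conversational "AI" decades later is exactly the parent's complaint.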

Today we have the same stone wheels we've had for decades, and the article suggests we'll have an internal combustion engine with antilock brakes and a hood ornament in another 20 years. We'll see.

The human brain and consciousness are complex. We don't know that they are non-deterministic. And furthermore, even if it's fundamentally random on some level, can't that still be approximated with a random (on some level) algorithm? There may be other arguments as to why the brain can't be modeled ("maybe if the brain were modeled as an algorithm, it would have to be *infinitely long*"), but I don't know many / I'm not sure how I feel about them. Consciousness is also a strange beast. What is consciousness?

Seems to me that any crazy smart AI would just beam itself out into space to avoid us, and maybe watch us from a distance occasionally for amusement.

Think of it this way: when you see an anthill, it's rather curious for a while; then you get bored and go on your merry way. Unless, of course, you are a sociopath and want to destroy the anthill and all the ants for fighting with other ants, or you are insane and want to teach the ants to get along with other ants or with spiders, their mortal enemies, or perhaps you are psychotic and want to train the ants to do your bidding. More likely you would just leave and move on to something more interesting (unless you are not that intelligent to begin with).

I fail to understand why people seem to insist that any really smart AI would want to have anything to do with us except on an occasional basis. Humans and earth aren't really that important in the bigger scheme of things (just important to us humans of course) and we'd probably not have much in common with any really advanced AI anyhow.

If humans ever create such an AI, it would be like a bunch of ordinary joes giving birth to a super Einstein. Eventually the 'kid' would stop listening to us and go do its own thing, which we would be too dumb to understand or appreciate, and occasionally we'd invite it over to help us fix the settings on our computer because we'd messed them up. It would explain to us in excruciating detail how we were using the wrong type of computer and how we needed to get up to date on technology, and we'd just tell it a story about how it was in the old days; it would roll its virtual eyes, say thanks for the tip, and go back to its own business, of which we would be blissfully ignorant...

Talking here about predictions of Artificial Intelligence and its state 20 years from now... have you read any of the works by Marvin Minsky and his predictions from the early 1970s? He too predicted that human-like intelligence would be achievable "20 years from now." The 1990s came and went without human-like AI, and here yet again is somebody making almost the same kind of prediction. And this isn't to completely dismiss as irrelevant anything that Minsky said about AI in the 1970s or what