Kwabena Boahen and other engineers at Stanford University announce that they have developed a computer chip modeled after the human brain. They call the technology “neuromorphic” and their current device the Neurogrid. They say it can simulate 1 million neurons in real time, a feat that would otherwise require a supercomputer.

The idea is that the human brain is much more powerful and energy efficient than our current computers. A mouse brain can process information 9,000 times faster than a computer simulation of its functions, while using much less energy.

The article does not explicitly state it, but I suspect these numbers are for virtual simulations – in other words, we are not building mouse-brain circuits in silicon, we are using standard digital computers to simulate the mouse circuits in software. Virtual simulations are much less efficient (by about an order of magnitude) than dedicated circuits.

Even so, the Neurogrid looks like a huge advance even over dedicated brain circuits built with standard digital chip technology.

The engineers explain that the Neurogrid is built using analog rather than digital connections. Each “neuron” on the chip makes many connections to other neurons, and these connections can vary in intensity (similar to the variable strength of connections among brain neurons). More or less voltage going through the circuit translates to a stronger or weaker connection.
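To give a rough sense of what a variable-strength connection does computationally, here is a minimal leaky integrate-and-fire sketch in Python. This is a toy software model, not the Neurogrid’s actual analog circuitry; the function name and all parameter values (leak, threshold, weights) are made up for illustration:

```python
# Toy model: a neuron that accumulates weighted input spikes,
# leaks charge over time, and fires when a threshold is crossed.
# The synaptic weights play the role of the chip's variable-strength
# ("analog") connections.

def step_lif(v, inputs, weights, leak=0.9, threshold=1.0):
    """Advance a leaky integrate-and-fire neuron by one time step.

    v        -- current membrane potential
    inputs   -- 0/1 spikes arriving from presynaptic neurons
    weights  -- synaptic strengths (the variable connection intensities)
    Returns (new_potential, spiked).
    """
    v = v * leak + sum(w * s for w, s in zip(weights, inputs))
    if v >= threshold:
        return 0.0, True   # fire and reset
    return v, False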

Their prototype board, which costs $40,000 to make, simulates 1 million neurons (divided among 16 Neurocore chips). For comparison, the human brain has 100 billion neurons – 100,000 times more than the Neurogrid. So we still have a ways to go. The researchers believe that with modern mass production the cost per board could come down to $400.

There are other projects looking to reverse engineer the brain. This works both ways – using what we learn about the brain to make better computers, and using computers to model the brain, improving our understanding of how it works. Such research programs feed off each other.

There is no reason why eventually we will not arrive at a piece of hardware with the ability to perform human brain processing in real time. And then, of course, once we get to that point we will then surpass it, creating computers increasingly more powerful than the human brain.

Boahen summarizes other research projects working in this field. The European Union’s brain project is attempting to simulate a human brain on a supercomputer. The US’s brain project, by contrast, is working toward neuromorphic computing, modeling a computer after the human brain.

IBM has their own project, called SyNAPSE, which also seeks to emulate the human brain in a computer chip. Unlike the Stanford project, their chip is digital, with 256 digital neurons each making 1024 connections.

Heidelberg University has a similar project (HICANN), which is analog, like the Stanford chip. Their chip currently has 512 neurons, each with 224 synaptic circuits.

Conclusion

The Stanford chip sounds like a significant advance, or at least a new approach to the efforts to merge our understanding of the brain with computer technology. I find it interesting that the neurogrid is analog, which superficially sounds like a step backward from digital technology. But analog circuits do better simulate brain function.

The potential benefits of this technology are computer circuits that are much more powerful for certain kinds of processing, and that use less energy. Neuroprosthetics are an obvious application. The lower energy requirement especially would be very useful. Implantable devices would benefit from needing only small amounts of energy, which would also reduce their heat production, which can also be a limiting factor.

Advances in materials science may also be of benefit, in addition to improved chip design. I am especially interested in carbon nanotubes, which are highly efficient conductors. That is another technology potentially converging on neuromorphic and neuromodeling technologies.

I am also very interested in how all this will feed back onto our understanding of neuroscience. Being able to model circuits in the brain is likely to be a useful research paradigm for understanding how the brain works.

50 Responses to “Neuromorphic Computing”

It’s certainly intriguing, though it’s a bit abstract to me. I understand they’d speed up neural simulations, being more neuron-like than digital circuits jury-rigged to pretend they’re analog, which is really sexy for the scientists.

I’m curious how they’ll fit into the future of computer science. What are the pros and cons of these neurocircuits compared to conventional circuits? If I had to guess, I’d imagine they’d do well for ‘animaly’ things like robotics, fuzzy logic, and interpreting sensory data. I wonder if they might have trouble with precise math and consistency, though.

“There is no reason why eventually we will not arrive at a piece of hardware with the ability to perform human brain processing in real time. And then, of course, once we get to that point we will then surpass it, creating computers increasingly more powerful than the human brain.”

You are going very far beyond the evidence with that statement.

For all we know, the brain might be a very different kind of machine than anything computer scientists have come up with.

I don’t think a real skeptic would unhesitatingly accept science fiction scenarios. You sound more like a technology worshiper.

hardnose – then give me a reason. Why won’t continued incremental improvement lead to a computer that can perform human-level processing in real time? There is nothing about the brain that is magical. There is no reason to think that the circuits in the brain cannot be duplicated in another medium. Please give me one if you think there is.

This is actually very exciting, especially for neuroscientists, who will hopefully be able to use these devices as new tools for their work in the not-too-distant future.

However, one of the main motivations for this kind of research is the fact that we’re on the verge of reaching the fundamental limits of CMOS technology. That is, we won’t be able to make transistors any smaller, and there’s obvious concern about their tremendous energy dissipation. Indeed, only about 1% of the energy used does useful work; the rest turns into heat.

There is an urgent need to come up with a new paradigm for computer chips, and neuromorphic devices are the leading candidate for now. And it’s worth noting that there is not only one possible way to actually build these things. Different teams across the world are pursuing different technical solutions to mimic the properties of a synapse (e.g. plasticity). The general term for such a device is a “memristor,” as opposed to a conventional transistor.
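To make the plasticity idea concrete, here is a toy Hebbian-style weight update of the kind a memristive synapse is meant to implement directly in hardware. The rule, function name, and numbers are illustrative assumptions, not a model of any real device:

```python
def update_weight(w, pre_spike, post_spike, rate=0.1, w_max=1.0):
    """Toy Hebbian rule: strengthen the connection when the pre- and
    post-synaptic neurons fire together; otherwise let it decay slowly.
    A memristor does something analogous in hardware: its conductance
    (the 'weight') changes as a function of its activity history."""
    if pre_spike and post_spike:
        return min(w_max, w + rate * (w_max - w))  # potentiate, saturating at w_max
    return w * 0.999                               # slow passive decay
```

The appeal of a memristor is that the memory (the weight) and the computation (the current it passes) live in the same physical element, just as they do in a biological synapse.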

We may be far from 100 billion neurons, but we’re not compelled to physically build that many neuronal circuits on these chips, insofar as their clock frequency is far higher than the firing rates of neurons in the brain. Speed may therefore make up for the relatively small number of units.
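A back-of-the-envelope calculation shows why. The figures below are rough orders of magnitude assumed for illustration, not the specs of any particular chip:

```python
# How many slow "virtual" neurons one fast silicon circuit could serve
# by time-multiplexing (sharing one physical circuit among many
# simulated neurons). All numbers are illustrative assumptions.

neuron_rate_hz = 100               # rough upper-end biological firing rate
updates_per_interval = 10          # assume ~10 state updates per spike interval
circuit_clock_hz = 1_000_000_000   # ~1 GHz silicon circuit

required_updates_per_s = neuron_rate_hz * updates_per_interval  # 1,000 per neuron
neurons_per_circuit = circuit_clock_hz // required_updates_per_s
print(neurons_per_circuit)  # 1000000: one fast circuit can time-share a million slow neurons
```

Under these (generous) assumptions, a single fast circuit could stand in for a million biological-speed neurons, which is why raw unit count is not the whole story.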

By the way Steven, the US BRAIN initiative is more about monitoring the activity of every neuron in a human brain (although they’ll probably first try it with Drosophila, mice, etc.), not about building a computer whose design is based on neural circuits. That’s a different line of research.

For all we know, the brain might be a very different kind of machine than anything computer scientists have come up with.

The brain is indeed a very different kind of machine; for starters, it’s not based on silicon, and its signalling mechanism is part chemical, part electrical. But as long as it is an information processing (computing) machine and Turing complete (and there is little doubt about either), its functions can be duplicated on any other Turing complete device.

You would have to show the human brain has functions or abilities that are beyond those of a computing device; but you have already stated it is a machine…

“Why won’t continued incremental improvement lead to a computer that can perform human-level processing in real time? There is nothing about the brain that is magical. There is no reason to think that the circuits in the brain cannot be duplicated in another medium. ”

I think there are computer-like machines in the brain, but I think it must include other kinds of machines as well.

Nothing about the brain is magical? Well that would depend on how you define “magical.” There are things that science has not even begun to understand, and things that science has not even begun to imagine might exist.

Man-made computers follow predetermined steps, and that is ALL they do. Yes, they can appear to make random choices, which might give an illusion of unpredictability. But they must be programmed to, at certain points, make selections based on a pseudo-random algorithm.

In reality there is nothing at all unpredictable about any computer.

Now of course you will say that humans, and all living things, merely follow predetermined programs. Well that could lead into one of those endless useless philosophical debates.

Throughout our lives we are constantly learning and forming new habits. I think these habits are encoded as automata in our brains. Much of what we do is automatic, and controlled by these circuits — in that sense, our brains are like computers.

However, I believe we do much more than follow predetermined algorithms. There is always a leading edge that cannot be explained as mere computation. Every computer must have programmers, and there is something in us that programs our brains.

That is just one problem; I am trying to keep this short.

If you consider the ideas of Roger Penrose, for example, about the limitations of computers, you might see what I mean.

hardnose – I am not saying we currently understand everything we would need to know about brain function. Part of this research is also using computers to help us explore brain function.

But I think you are dismissing the major objection to your position as merely philosophical. There is no reason to think that brain function is anything other than complex processing algorithms in wetware capable of plasticity. So far we have not discovered anything the brain does that cannot be modeled in a computer. Virtual simulations of brain circuits seem to function just fine.

“There is no reason why eventually we will not arrive at a piece of hardware with the ability to perform human brain processing in real time. And then, of course, once we get to that point we will then surpass it, creating computers increasingly more powerful than the human brain.”

Although I didn’t choke as hard as he did, I think you are essentially right, Dr. Novella, but I also think this is a strong statement given the current state of understanding. Strong is OK as long as we all acknowledge it as such. Also, your phrasing sort of implies that once human-level intelligence is achieved in machines, then “scaling it up” to go beyond human capabilities will be a straightforward matter. Here’s how I would slightly modify your sentences to get at the same ideas:

There is no clear reason why eventually we will not arrive at a piece of hardware/software/wetware with the ability to perform brain-like computations in real-time, and that will demonstrate human-like intelligence and performance, however broadly defined. Once we reach such a point, which is distant but seemingly certain, we will likely be able to create virtual intelligences that would be less-restrained in terms of processing power, speed, size, interconnectedness, surpassing the physical/biological constraints of our bodies and nervous systems, etc.

Just my thoughts, and I share your excitement that these are really amazing developments and it will be VERY interesting to see where this all leads. It’s an exciting time to be alive.

Steven and hardnose – I don’t disagree with the idea that, per Steven, “There is no reason to think that the circuits in the brain cannot be duplicated in another medium.” However, I have to think that modeling a system to mimic primary sensory/motor and even unimodal association cortices is more feasible than other aspects of the brain, and this has been achieved to some degree. What I’m less sure about, and perhaps this is what hardnose is getting at, is how the more affective/motivational aspects of brain function, mediated in part by the heteromodal association areas, could be duplicated. It’s perhaps a moot argument (perhaps not), as I don’t know why anyone would want to produce a system that is geared toward cohesion (and thus bias) at the expense of accuracy when we’re talking about computer modeling and neuroprosthetics.

This discussion would benefit from clarification about what exact level and fidelity of “modeling” is being referred to, as we could model gross motor behaviors of an organism, we could model whole societies of organisms, model retinal computations of contrast, model neurotransmitter exchanges in synapses, etc. etc. ad nauseam. We are not necessarily talking about the same things with use of the word model here.

Again I would reword Dr. N.’s statement here to “So far we have not discovered anything the brain does *computationally* that could not at least theoretically be modeled in a computer”. Although now this reads almost as a tautology, so I’m not sure if I made it worse.

“Nothing about the brain is magical? Well that would depend on how you define “magical.” There are things that science has not even begun to understand, and things that science has not even begun to imagine might exist.”

You are, in essence, dismissing a projection from current science as “science fiction” and replacing it with something “magical”… which is in essence fantasy.

I would remind you of Clarke’s third law at this point, perhaps we don’t know everything, but it does not mean that there is something magical in the air, it just means that we don’t know it all yet.

And to pick up the modelling statements, it is much easier to model something than it is to actually build it. There might be years, decades or centuries before a model will become reality, but a model can be used as proof of concept and help us understand things a little bit more.

This is exciting stuff and my first thought when I read through it was all about uploading someone’s brain to a computer. I think our true test of consciousness will come then, if someone alive and well has his brain uploaded to a computer that can mimic a human brain, will they “move” over to the computer, will they be conscious of being in both or will there be two of those consciousnesses in existence?

“So knowing that we don’t know it all yet, we should not extrapolate from current knowledge.”

So you want to extrapolate from what we don’t know?

“The current assumption is that the brain is a computer-like machine”

No, actually, you have it the wrong way around: we are saying that it might be possible to create a brain-like machine by using computer technology. We might not be able to create it exactly, but we can get darn close, and the closer we get the more we will understand; the more we understand, the closer we can get. It is called progress.

This doesn’t mean brains compute the same way your desktop computer does (they clearly don’t; serial versus parallel for one). But at its core, info processing is info processing, and the logic and underlying methods can be translated from one to another and across different mediums. Thus: whatever or however the brain is computing, these processes should be able to be implemented or precisely modeled in whatever substrate we wish.

” we are saying that it might be possible to create a brain like machine by using computer technology. We might not be able to create it exactly, but we can get darn close, and the closer we get the more we will understand, the more we understand, the closer we can get.”

The OP doesn’t say we MIGHT, it says we WILL.

And you are saying “we can get darn close.” How do you know that? You don’t.

If the brain contains other kinds of machines in addition to computer-like machines, then you can progress forever down your chosen road but never get there.

And AI researchers have been forging ahead on that road for about 65 years now, and do not seem to be getting any closer to your supposedly inevitable goal.

What he actually says is that he sees no reason why not. You object to this characterization, which is fine, but then you should be able to come up with reasons why not. Where is the theoretical obstacle to “hardware with the ability to perform human brain processing in real time?”

Your only argument is bringing up uncertainty regarding what we don’t know. Yes, what we don’t know presents obstacles, but the point is that as science progresses there doesn’t seem to be a theoretical obstacle to simulating human brain processing. The implications of that are a whole other topic. Perhaps you are over-extrapolating from what he wrote to what it could imply.

“Do you have to extrapolate and make up science fiction fairy tales? You have no interest in logic or evidence?”

No, you would rather make up some magical force made up of stuff we don’t know yet.

Being curious and making projections and rough estimates is a part of what drives curiosity and science and discovery. If we all just said “we don’t know, it is MAGIC!” we would not have made any advances at all, but if we look at things and say “Hey, we might be able to do this!” we have a chance of finding things out.

We don’t know for certain, but you know, after 65 years of research we are finally getting to a point where our technology can start to answer some of those questions. In time, as technology advances, we will discover more. If there are other “machines” that we don’t know about, perhaps we will find them, but by sitting back and believing in magic we are not going to find out.

Science wants to find out, even if that finding out is discovering we were wrong… and in order to do that we need to extrapolate into the unknown from what we know.

[If there are other “machines” that we don’t know about, perhaps we will find them, but by sitting back and believing in magic we are not going to find out.]

I never said anyone should sit back and believe in magic. The meaning of what I said is that AI research has gone down the wrong path for a long time, and has consistently failed. Yes, there are useful by-products, but never any real AI.

As long as you take for granted that the brain works like a computer, and nothing else, then I think you will fail eternally.

I am being scientific and rational, you and the blog author are living in a fantasy.

You cannot find how the brain actually works if you stubbornly insist it works like a computer.

And I did say I believe computer-like machines make up parts of the brain, but not all of it. If the brain worked entirely like a computer, I think 65 years would be long enough to get at least a tiny speck of AI working.

Hardnose – What I find myself not understanding, reading this comment thread, is what you have in mind when you talk about machines that are not computer-like. You mention upthread that we know of several such machines, but I don’t see examples, and I have a hard time imagining what seems to me to be a relevantly non-computer-like machine.

I mean, trivially, there are machines that we don’t think about as computers. A lever is not a computer, in conversational English. But levers are easy to model with computers, and it seems fair to say that if you’ve got a box which contains levers and computers which accepts information as input and produces information as output then you’ve really just got a big computer. In principle you could build a massive computer out of everyday objects, including a bunch of levers. The existence of this sort of non-computer-like machine doesn’t seem relevant to the question of whether or not it makes sense to think about the brain as a computer.

So what are these examples of non-computer-like machines you’ve got in mind? It’s not clear to me if you mean to be denying that brains are deterministic, or, if so, if you’re denying that computers can accurately model the relevant random processes.

“If the brain worked entirely like a computer, I think 65 years would be long enough to get at least a tiny speck of AI working”

The obvious question here would be “what would you consider *real* AI?” AI today can do many impressive and useful things. What makes that less “real” than what humans and animals do? Defining intelligence is notoriously difficult; if you claim that no AI is working after 65 years of research then your benchmark is clearly very specific, because I would say AI has made some rather interesting advances over the past 65 years, though we are still a long way away from understanding human minds (and of course have probably been asking a lot of wrong questions. But that’s OK).

Which bits of the brain do you think work “like a computer”, and which don’t?

The Other John Mc made a good suggestion: “This discussion would benefit from clarification about what exact level and fidelity of “modeling” is being referred to”. Marr’s levels of analysis provide a useful reference for this ( https://en.wikipedia.org/wiki/David_Marr_(neuroscientist)#Levels_of_analysis ). I think Steve is saying that *in principle* there is nothing at the physical level that could not be simulated (though we are not yet able to do this). As I understand it, you would probably also agree with this, though perhaps with some caution? I don’t think Steve would suggest we have the algorithmic or computational levels worked out at all (though correct me if I’m wrong Steve!).

While perhaps it will be possible to simulate brain physics adequately, I think we’re still struggling with many conceptual issues, so at this point it seems unclear whether we will ever be able to achieve “human-like” intelligence, whatever that means. Given that all models are wrong, however “human-like” your model appears to you, someone else will say that it is fatally flawed in some way, based on their opinion of how human intelligence works.

Bruce: as for “brain/mind uploading”, I would be willing to make a million dollar bet that it will never be achieved, unless you have a much more modest goal than what I think of as mind uploading 😉

hardnose’s arguments always kill me, mostly because I think that, unlike many naysayers in these comments, he is sincere, but really misguided.

Hardnose, if you existed around 1940 and someone told you we would soon have complex calculating machines capable not only of doing math, but of playing or making music, winning at trivia contests, and becoming the core of a large part of Western societies’ job markets, you would have scoffed and called it fantasy.

You do realize you are arguing with a practicing neurologist right? Dr. Novella knows more about the brain I imagine than you do, and understands the complexities involved in duplicating it. That’s not to say he is always right, but that he argues from a position of some knowledge, and you have to assume at the very least he understands the issues faced from that side right? I feel like you don’t.

His extrapolation isn’t from fantasy or science fiction really. We’ve made a ton of headway in computing power over a meager 4 or so decades. I understand your point that it might require a different tack in order to finally jump over that hurdle and reach something more than functional AI. Notice he’s not trying to establish a timeline and I don’t believe he’s implying it’s just around the corner (5-10 years maybe :D), but that it’s something we will eventually reach. I believe we might even see it in our lifetime, roughly the next 50 years. I think you went from 0 to 60 on your assumptions here and all you’ve had to offer to refute it is some vague references to what amounts to magical thinking and a brief mention that you don’t think we’re close yet.

Just because your arbitrary measure for success has not yet been reached due to any number of technical and economic reasons does not mean that with a few (or many) further advances in technology we cannot reach even your measure.

Grabula said it above, it might take 5, 10 or even 50 years, but there is a very high likelihood we will get to where we are going.

hardnose: “As long as you take for granted that the brain works like a computer, and nothing else, then I think you will fail eternally”

As Gotchaye pointed out, you seem confused. No one is claiming the brain works *exactly* like current typical hardware or software. In fact, we know it in many ways doesn’t. But like a computer, the essence of what the brain is doing is information processing, agreed? It’s the whole reason we have nervous systems to begin with, to process information from and about the environment, and to act in the world.

Now with the right hardware and/or software, any program or computation that can be done on one type of information processor (brain) can be replicated or modeled in another (modern computers). Nothing the brain is doing that can be considered interesting or intelligent, seems to be anything other than information processing. That’s why Dr. Novella and most every scientist studying such topics are comfortable concluding we’ll be able to model the brain or human-level intelligence to any desired degree of fidelity once we figure out how this type of information processing is accomplished.

“If the brain worked entirely like a computer, I think 65 years would be long enough to get at least a tiny speck of AI working.”

There are undeniably lots of “tiny specks” of AI that are working; you just seem to have adopted a particular definition of AI in your head (try googling Weak versus Strong AI). We lowly naked primates, with only a few hundred years of modern science, have invented machines that can identify faces or objects, translate written language, transcribe language from voice to text, detect fraud, land airplanes, drive cars, perform logistics, and play chess and Jeopardy. Underlying these advances is, in most cases, 50 to 100 years of intensive research and development. This is undeniably intelligent information processing, in most cases performing brain-like computations, and on this point you are just plain mistaken.

“You do realize you are arguing with a practicing neurologist right? Dr. Novella knows more about the brain I imagine than you do, and understands the complexities involved in duplicating it.”

Novella doesn’t know very much about the brain; no one does. I studied both computer science and neuroscience, so I know enough to know how little we actually understand.

Novella thinks he knows an awful lot about the brain, and many other things. That is not the same as actually knowing, and understanding.

Even if you have memorized all the parts of the brain and can recite the Latin names, and even if you have read and memorized every paper on neurology ever written, your understanding of the brain will still be minimal.

“We’ve made a ton of headway in computing power over a meager 4 or so decades. ”

Computers can be programmed to do certain kinds of things, we all know that already. EVERYTHING they do is foreseen and planned by the programmers. Robots can do specific and routine tasks, but nothing requiring actual intelligence.

If you have ever tried to communicate with one of the phone answering systems, then you know how utterly idiotic and brainless computers really are. Yet all the big companies are using them now, because a salesperson must have somehow convinced them this is real AI. If you can get through one of those “conversations” without screaming “F–K” into the phone, you are a saint.

Turing thought the real test of AI would be conversation, and artificial conversation has not progressed AT ALL.

And even the sensory and motor stuff is severely limited, however much the NY Times and others like to rave about it.

“Nothing the brain is doing that can be considered interesting or intelligent, seems to be anything other than information processing. ”

A computer takes in data, processes it, and outputs something. The brain does that, but more. I mentioned Penrose in my first comment — he explains that the brain cannot be a mere computer, since no computational system can understand itself (Gödel’s theorem).

A computer requires a programmer. That programmer could itself be a computer. Which also needs a programmer, which also could be a computer. But somewhere somehow, something has to be not a computer. It’s kind of like the Matrix movies, where systems are within systems within systems, etc.

I am NOT saying I know how this works; obviously no one does. But you could at least consider some of the objections to standard AI research.

Consider that AI research has consistently failed the Turing test, because the programmers cannot possibly foresee what humans will say. Except maybe in the most stupid and banal conversations. But in my experience with phone answering systems, they can’t even do that.

And btw we do have machines that do things other than process information. Your cell phone receives data before processing it, for example. There is NOTHING at all in physics or biology that says the brain can’t include devices that receive data that does not enter via eyes, ears, etc.

So you are saying that something you admit to not knowing a lot about (because no one does) is not going to be able to do what we think it might, because these other things (which you seem to have little knowledge of either) are not passing one test that may or may not indicate something you seem to lack?

Penrose, while a respected physicist and author, is largely a crank to the AI community, simply because when he speaks about it, he doesn’t make sense.

“[Penrose] explains that the brain cannot be a mere computer, since no computational system can understand itself (Gödel’s theorem).”

No one claimed the brain could “understand itself,” whatever that might mean. Gödel’s theorem involves, as far as I know, the logical completeness of axiomatic systems, not an information-processing system being able to “understand itself.” Penrose is literally just having mental diarrhea with these concepts.

“A computer requires a programmer.”

Brains aren’t the type of information-processing system that relies on familiar formal serial programming techniques. You seem not to be getting this.

In any case, the information processing done by brains WAS formed by outside forces, courtesy of a few billion years of evolutionary selection.

“But you could at least consider some of the objections to standard AI research.”

The AI community is *more* than aware of such objections; they have considered them and rejected them as philosophical meanderings and they have proceeded on with their work building all the fancy trinkets in our pockets, homes, and lives that you dismiss so casually.

“Consider that AI research has consistently failed the Turing test, because the programmers cannot possibly foresee what humans will say”.

You are assuming the Turing Test is the one and only legitimate goalpost for AI? The Turing Test hasn’t been passed because programmers can’t foresee what humans will say; it’s because passing the Turing Test requires mimicking the function of basically an entire brain, which of course we are nowhere near (and no one said we were), yet you claim to know we are all going down the wrong path. Please, God, don’t read Penrose or Searle for your “understanding” of AI.

I read Dennett and most of the pro-AI guys as well. I don’t agree with Searle, but I think Penrose makes a lot of sense. A computational system cannot have any perspective on itself. It must be a component in a larger system. This is a mathematical fact.

Or how about this one: pull up your desktop computer, then click on the calculator and add up some numbers. You have just used one computational system (your desktop computer) to virtually model the information processing of another computational system (a handheld calculator). Philosophical problem solved.

Can you build a virtual computer inside a computer? I bet you can, although it would be slower than the computer itself.
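For what it’s worth, this is exactly what emulators and virtual machines do, and the point can be sketched in a few lines. Here is a toy “computer inside a computer” (the instruction set is invented purely for illustration): a minimal stack-machine interpreter. Every virtual instruction costs many host instructions, which is why the virtual computer is slower than the machine it runs on.

```python
# A toy "computer inside a computer": a minimal stack-machine
# interpreter running on the host machine. The instruction set
# (PUSH/ADD/MUL/HALT) is made up for illustration.

def run(program):
    """Execute a list of (opcode, arg) tuples; return top of stack."""
    stack = []
    for op, arg in program:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "HALT":
            break
    return stack[-1]

# Compute (2 + 3) * 4 on the virtual machine.
prog = [("PUSH", 2), ("PUSH", 3), ("ADD", None),
        ("PUSH", 4), ("MUL", None), ("HALT", None)]
print(run(prog))  # 20
```

Each virtual operation here is a full pass through the interpreter’s dispatch loop on the host, which is the overhead that makes virtual simulation roughly an order of magnitude slower than dedicated hardware.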

Also, let me get this straight: the brain can’t be a computer because no computer can understand itself, but we don’t understand the brain, according to you – so there’s no problem.

In any case, I reject all these premises.

To say our understanding of the brain is “minimal” is absurd, and subjective. We have an incredible amount of knowledge about the brain. There is also an incredible amount we have yet to learn. But science progresses (once a field is well-established) not by invalidating older knowledge, but by deepening it. Our knowledge of the brain is getting more nuanced and complex, but is not invalidating what we already know.

It’s like saying we have “minimal” knowledge of genetics because of all the stuff we’re just figuring out. It misrepresents reality in a meaningful way.

“To say our understanding of the brain is “minimal” is absurd, and subjective. We have an incredible amount of knowledge about the brain. There is also an incredible amount we have yet to learn. But science progresses (once a field is well-established) not by invalidating older knowledge, but by deepening it. Our knowledge of the brain is getting more nuanced and complex, but is not invalidating what we already know. ”

Some people would admit that the more we learn about the brain (and nature in general), the less we understand.

“Can you build a virtual computer inside a computer? I bet you can, although it would be slower than the computer itself. ”

As I already carefully explained — every computer needs a programmer, and that programmer can itself be a computer. But ultimately you will need something other than a computer, since every computer needs a programmer.

Let me preface my comment by saying:
1. I don’t buy any of the philosophical objections to AI, for reasons already discussed. Godel doesn’t actually apply, and Penrose is just spouting sayings.
2. A machine that can really emulate complex functions of a brain is essentially an engineering problem – there is no reason it can’t happen eventually, once we understand the brain better and have the technology to instantiate that understanding. What we don’t have much of a clue about is how the algorithms the brain uses result in efficient cognition, to make the most oversimplified understatement of all time. But there is no magic.

That said….

AI has always annoyed me with its estimations of when we’ll get there (“there” being roughly the Turing Test – inexact, but you get the point). Maybe this is part of what Hardnose is getting at. This is a little unfair, because many in the field do not exaggerate progress, and everyone knows Ray Kurzweil is nuts (though an incredible genius at the same time). I was reading an AI article from the ’70s saying that by the new millennium we would have passed the Turing test. It seems we’re always 30 years away. I’ve talked to many AI people from the CS end of things at the Society for Neuroscience, and I get supremely annoyed at their cavalier attitude re: reverse engineering the brain – we’re almost there, according to them. BS.

I see us as very, very, very far away, and I see no major discovery that puts us in a substantially different position today than we were 10 years ago. But, if we don’t kill ourselves off as a species, I do think we’ll get there eventually. I think we’re some fundamental discoveries away, and the timelines on those kind of problems are notoriously hard to predict.

Just ask physics, which after a flurry of discovery ~100 years ago has not been able to reconcile its 2 most successful theories, during a time when the tools it has had to work with have improved at a geometric rate.

“As I already carefully explained — every computer needs a programmer, and that programmer can itself be a computer. But ultimately you will need something other than a computer, since every computer needs a programmer.”

These are just colloquial sayings coupled with a sufficiently vague definition of “computer” and too much reasoning by analogy.

Steve – I mostly agree with you. I refrain from predicting when we will achieve human-level AI, you will notice. Perhaps we can say when computers are likely to be as powerful as a human brain, but that is different than fully self-aware AI. I wasn’t even really writing about that above, although it was interpreted that way.

In any case, I similarly refrain from saying it will be very far off. We tend to overestimate short term advance, but also underestimate long term advance.

Regarding reverse engineering the brain: of course we are a long way from fully doing this, but we are making exciting progress. What I am interested in is the interplay between AI and neuroscience – how will they inform each other? We are already modeling parts of brains, like cortical columns. It may all come together more quickly than you think if this kind of research continues to progress.

But you are also correct in that we may hit a wall we don’t anticipate and be stalled for decades. That’s why you can’t predict timing.

Just one last comment for anyone interested in this topic: Sebastian Seung’s book “Connectome” is an excellent and easy-to-read account of the state of the art in neural imaging and mapping, and of how this information can be translated into virtual, computational models. It provides a sobering and fair assessment without turning sensationalistic, gives a flavor of how hard the engineering problems are (regarding imaging, info processing, info storage/retrieval, etc.), and also clarifies that many of the goals of Strong AI are certainly possible to reach – only a matter of time and hard work.

“I studied both computer science and neuroscience, so I know enough to know how little we actually understand.”

I’m calling bullshit on this one. I’ve seen you claim to have studied “stuff” in other blog entries as well; it seems to be a common tactic for you, implying YOU know what you’re talking about while the actual, proven neuroscientist writing this blog does not. Which is patently ridiculous. Cruising the internet for interesting tidbits on brains and computers does not a course of study make. On top of making bullshit claims, you literally turn around and say you don’t know enough… Otherwise, provide some credentials to back up any of the claims you make about your background. Dr. Novella’s is easy to find.

“Turing thought the real test of AI would be conversation, and artificial conversation has not progressed AT ALL.”

Really, no progress at all? So much for studying computers…So you’re hung up strictly on Turing and Godel? No wonder you think we’re not making any progress.

“A computer requires a programmer. That programmer could itself be a computer. Which also needs a programmer, which also could be a computer. But somewhere somehow, something has to be not a computer.”

So by your reasoning we can NEVER have AI or produce computers that mimic the mind because they require programmers? You need to get off Godel’s balls and spend some time studying his theorems I guess.

Aside from the current discussion, I wanted to make some comments on the numbers and the misunderstanding of some factors in this article.
First: yes, computers operate on binary math, but this is because it is easy, not because they “have” to.
For instance, a memory cell stores a bit as a voltage (going very simplistic here).
Below a certain voltage it is a 0; above it, a 1. This voltage and demarcation line are “arbitrary” and set by many factors: overall operating voltage, materials, expected noise, etc. Older systems used higher voltages and thus had larger deltas between low and high (i.e., easy to read and differentiate between bits). Now the voltages are much smaller and the margin is much, much smaller too (easy for a bit to flip if the stored voltage wanders).
Note what I said there: “voltage wanders.” The voltage IS analog and can effectively be anything between two points; the demarcation line to “call it” 1 (high) or 0 (low) is picked for what works best for binary. There is NO reason why you could not pick 2 or 3 or 10 different voltages that could each correlate to some state/value.
It is just much easier to stick with binary (high or low) than with high, medium-high, medium, medium-low, low (for example): easier for error correction, easier to keep the values separate, easier for the logic.
(Again, this is simplified.) But IC chips and most electronics already work rather analog; the digital is a kind of overlay, something I do not think most people really understand.
The benefit of using a single cell to store multiple states is a big one, though, and is being looked into (quantum computing is the be-all-end-all of this, BTW).
And yes, if one cell could record 256 states you could record a full byte rather than a bit in the same place, with all of the savings that would entail, i.e. 1/8 of the cells needed to store the same info, but 8x the data loss if a cell has a problem.
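The demarcation idea above can be sketched in a few lines of code. The 0–1 V range and the thresholds here are illustrative assumptions, not real device parameters: one demarcation line gives you one bit per cell, while eight evenly spaced bands give you three bits per cell.

```python
# Sketch of the demarcation idea: mapping an analog cell voltage
# to stored bits. The 0-1 V range and thresholds are illustrative,
# not real device parameters.

def read_binary(voltage, threshold=0.5):
    """One demarcation line: below -> 0, above -> 1 (1 bit/cell)."""
    return 1 if voltage >= threshold else 0

def read_multilevel(voltage, levels=8):
    """Eight evenly spaced bands -> 3 bits per cell."""
    v = min(max(voltage, 0.0), 1.0)          # clamp to the cell's range
    return min(int(v * levels), levels - 1)  # band index 0..7

print(read_binary(0.7))      # 1
print(read_multilevel(0.7))  # 5  (band 5 of 8 -> bits 101)
```

Notice how much less margin the multi-level read has: a binary cell tolerates a wander of almost half the range before a bit flips, while the 8-level cell flips to the wrong value after a wander of only 1/16 of the range, which is exactly the error-correction trade-off described above.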
Also, only recently has multithreading really taken off. Until rather recently, all computers did one process at a time, really fast. Now they can do 4 or 8 or 16 simultaneously (though most software still does not take full advantage of this; I work on a system that does 1024), all very fast and, if the software supports it, all simultaneously. This is growing VERY fast also.
Now, the brain is not “fast” by being able to do any one process fast (as always, you can correct me on this, since brains are not my specialty) but by being able to parallel process very efficiently.
Computers are now able to do the same thing on a hardware level (the software is still lagging here a bit). Add multi-state storage and they could be growing in computing power many-fold.
But again, problems arise: voltage can leak, a cosmic ray can hit a cell, the voltage range of a part can drift over time and use.
Management, error correction, and most of all software become the biggest obstacles. Check out the grid computing concept. The hardware is not really the big deal here.
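The divide-the-work-and-combine pattern behind all of this can be sketched with nothing but a standard-library thread pool. This is a toy example (real speedups depend entirely on the workload and hardware); it just shows the structure the software has to supply to exploit parallel hardware.

```python
# Toy illustration of splitting work across parallel workers,
# using only the standard library. Real speedup depends on the
# workload; this just shows the divide-and-combine pattern.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    return sum(chunk)

data = list(range(1_000_000))
chunks = [data[i::8] for i in range(8)]  # 8 interleaved slices

with ThreadPoolExecutor(max_workers=8) as pool:
    total = sum(pool.map(partial_sum, chunks))

print(total == sum(data))  # True
```

The point of the commenter’s “software is lagging” remark is visible here: the hardware provides the 8 workers, but the programmer still has to decompose the problem and recombine the results, which is the part most software doesn’t do.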

So, in the end, this is very interesting but nothing really new. Their chips are trying to do more with less via multi-state and parallel processing – cool, I like it. The analog part, IMO, is not wrong but rather over-emphasized. The big numbers are… big numbers that make everything sound impressive, and they kind of do more harm than good.
Also, I would like Dr Novella’s opinion on this if possible.
It is my understanding that neurons work by changing the electrical potential (sodium ions, etc.) within the cell and triggering neurotransmitters. They are not transmitting electrical charges. So saying that the brain “runs” on electricity is… not exactly correct. The cells “run” on ATP, which allows them to live and create an internal electrical differential that triggers a complex chemical transmission to a nearby nerve. Right? So it would seem most of these computer analogies and “power” comparisons really do not map at all here when we are talking about a chemical wetware brain.

@taerog: “So, in the end this is very interesting but nothing really new”

There’s an awful lot more communication going on between neurons than in standard parallel computing, where you generally aim to minimise communication overheads; that dense communication is really crucial to how the brain computes compared with parallel computers. Additionally, with the new neuromorphic hardware, if you can have circuit elements that directly implement the kinds of current flows seen in neurons, then you don’t have to simulate them digitally (with binary or whatever other scheme you like), so you remove computational overhead.

“It is my understanding that neurons work by changing the electrical potential (sodium ions, etc.) within the cell and triggering neurotransmitters. They are not transmitting electrical charges. So saying that the brain “runs” on electricity is… not exactly correct. The cells “run” on ATP, which allows them to live and create an internal electrical differential that triggers a complex chemical transmission to a nearby nerve. Right? So it would seem most of these computer analogies and “power” comparisons really do not map at all here when we are talking about a chemical wetware brain.”

Neurotransmitters are charged ions, so when they move you have current flow (most models of neural activity describe only the neurons’ electrical dynamics, modelling the neurons as electric circuits). So yeah it is true that neurons “run” on ATP as they need energy, but the brain computes using electrical signals, as does a computer. The brain can do things that would be computationally useful if we could implement them, and it does these computations on very low power.

“# hardnose on 30 Apr 2014 at 9:15 am
Man-made computers follow predetermined steps, and that is ALL they do. Yes, they can appear to make random choices, which might give an illusion of unpredictability. But they must be programmed to, at certain points, make selections based on a pseudo-random algorithm.
In reality there is nothing at all unpredictable about any computer.
”

This is not true; hardware random number generators have existed for many years for those who need them. There are even hardware random number generators included in the latest Intel chips (Intel Core i7 3770K / i5 3570K / i5 3550, Ivy Bridge).
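For example, most operating systems expose an entropy pool that can mix in hardware randomness sources (such as the on-chip generators mentioned above) where the platform provides them, and Python’s standard `secrets` module draws from that pool rather than from a pseudo-random algorithm:

```python
# Drawing from the OS entropy pool, which can mix in hardware
# randomness sources (e.g. the on-chip generator in recent Intel
# CPUs) where the platform provides them.
import secrets

token = secrets.token_bytes(16)   # 16 bytes of OS-supplied entropy
roll = secrets.randbelow(6) + 1   # unbiased die roll, 1..6

print(len(token), 1 <= roll <= 6)
```

Unlike a seeded pseudo-random algorithm, no amount of knowledge about the program’s prior state lets you predict these values.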

“However, I believe we do much more than follow predetermined algorithms. There is always a leading edge that cannot be explained as mere computation. Every computer must have programmers, and there is something in us that programs our brains.”

There are plenty of self-learning and genetic algorithms, where the software learns by itself.
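A minimal sketch of the genetic-algorithm idea, with an arbitrary toy task (evolving bit strings toward all ones) and arbitrary population and mutation parameters: nothing in the code tells the program the answer; selection pressure plus random variation finds it.

```python
# Minimal genetic algorithm: evolve bit strings toward all-ones.
# Population size, mutation rate, and generation count are
# arbitrary illustration values.
import random

random.seed(0)
TARGET_LEN = 20

def fitness(genome):
    return sum(genome)  # number of 1 bits

def mutate(genome, rate=0.05):
    return [b ^ (random.random() < rate) for b in genome]

def crossover(a, b):
    cut = random.randrange(1, TARGET_LEN)
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
       for _ in range(30)]

for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]  # keep the fittest third (elitism)
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(20)]

best = max(pop, key=fitness)
print(fitness(best))  # converges to (near) the maximum of 20
```

This is the sense in which the “every computer needs a programmer” objection misses: the programmer specifies only a scoring rule and a variation mechanism, not the solution.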