
kelk1 sends this article from the Stanford News Service:
"Stanford bioengineers have developed faster, more energy-efficient microchips based on the human brain – 9,000 times faster and using significantly less power than a typical PC (abstract). Kwabena Boahen and his team have developed Neurogrid, a circuit board consisting of 16 custom-designed 'Neurocore' chips. Together these 16 chips can simulate 1 million neurons and billions of synaptic connections. The team designed these chips with power efficiency in mind. Their strategy was to enable certain synapses to share hardware circuits. ... But much work lies ahead. Each of the current million-neuron Neurogrid circuit boards cost about $40,000. (...) Neurogrid is based on 16 Neurocores, each of which supports 65,536 neurons. Those chips were made using 15-year-old fabrication technologies. By switching to modern manufacturing processes and fabricating the chips in large volumes, he could cut a Neurocore's cost 100-fold – suggesting a million-neuron board for $400 a copy."
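For what it's worth, the arithmetic in the summary holds together. A quick sketch, using only the figures quoted above:

```python
# Figures quoted in the article summary above.
neurocores = 16
neurons_per_core = 65_536

total_neurons = neurocores * neurons_per_core
print(f"{total_neurons:,}")   # 1,048,576 -- the "1 million neurons"

board_cost = 40_000           # current cost per Neurogrid board
cost_cut = 100                # projected gain from modern fab + volume production
print(board_cost // cost_cut) # 400 -- the projected $400 board
```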

I highly doubt it. The brain isn't just a random mass of interconnected neurons; it has a complex structure that we have yet to fully map out or even understand. Also, the inter-neuronal connections involve the release and re-uptake of neurotransmitters, which is itself a complex system that we have yet to fully understand in some cases.

Don't get me wrong -- for biological systems that we do understand, like the center-surround cells in the retina or the hypercolumns of the visual cortex, a chip like this could be a powerful tool.

Perhaps we don't? A model of something you don't understand won't give you insight into the unknown. Perhaps one might discover something like human intelligence but you'll never know if it is the same thing.

Also, I think that Gödel (logically) and quantum effects (materially) stand in the way of understanding how three pounds of flesh can become intelligence and sentience.

If the human brain were so simple that we could understand it, we would be so simple that we couldn't.

I think a lot of the benefit from these chips is that we can try to simulate small brain structures with the expectation of failure. Then learn from that failure what new questions we should be asking.

Research on just a single slice of neurons leads to about a dozen research papers, and there are tens of thousands of such slices to be made through the human brain. Such research has led to improvements in automatic face recognition, motion stabilization for cameras and cochlear implants. Neurons are known to form similar groups known as cortical columns. These actually seem to overlap with each other and are replicated tens of thousands of times. Diffusion tensor imaging has provided a layout of the data

They have already simulated brain responses on a small set of simulated neuron connections; with what we use now it would take a vast machine to scale that up. This, OTOH, means they can put it into practice really soon.

So, let's call it 15 billion neurons and 150 trillion synapses, or tens of thousands of synapses per neuron -- ten times as many as this chip provides. That's going to be a problem. To say nothing of the fact that I would be very surprised if it allows for the billions of inter-chip synapses that would probably be necessary to model the non-local interconnections common in the brain within a 240,000-chip brain simulator. And that's just for the cerebral cortex. You've got the rest of the brain to simulate as well.
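To make the scale gap concrete, here is a back-of-envelope check using the rough figures in this thread (these are estimates floating around the discussion, not measurements):

```python
# Rough estimates quoted in the comment above, not measured values.
neurons_cortex = 15e9            # ~15 billion cortical neurons
synapses_total = 150e12          # ~150 trillion synapses
neurons_per_neurocore = 65_536   # per the article

print(synapses_total / neurons_cortex)                # 10000.0 synapses/neuron
print(round(neurons_cortex / neurons_per_neurocore))  # 228882 Neurocores needed
```

The ~229,000-core figure is in the same ballpark as the "240,000 chip" number above; either way, the inter-chip wiring, not the neuron count, looks like the hard part.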

Then there are the glial cells, which outnumber neurons by 10-50:1, and which recent research suggests may be considerably more involved in neural activity than presumed by the traditional "life support and other infrastructure" understanding.

Could be great for modeling larger portions of a mouse brain though. Maybe even to start modeling the simpler parts of a human brain. And we do have to start somewhere. I suspect we're at least a few decades away from being able to begin to simulate an entire human brain, and probably many more decades away from getting the simulation accurate enough that it might begin to actually function properly. After all, the number one benefit of these simulations is to fail spectacularly in interesting ways in order to help neuroscientists figure out what questions they should be asking.

Meanwhile we need to ask ourselves - if we're creating this simulation based on the human brain, then what are the odds that some form of consciousness dwells within it? And what sort of torture are we subjecting it to as its simulation collapses? And does the knowledge we gain justify that price?

If you look at some of the critters with the smallest brains, like garden snails (around 10,000 neurons), as well as mice and rats, then it should be very easy to simulate what they do - snails even have just one neuron to control all their motion muscles (forwards, backwards, turn). Even their eyes are moved by a few muscles and extended using just blood pressure.

Say what? Our *intelligence* seems to be increasing, but I've not heard anything suggesting that the number of neurons in our brains is doubling every 18 months. And if doubling in 18 months isn't what you're after, then you want "growing exponentially"; Moore's Law is only a prediction of one very specific example.

And don't forget we've got two brains [scholarpedia.org]. There is also a new current in Cognitive Science rapidly gaining ground - Enactivism [wikipedia.org] - which rejects the brain-is-everything paradigm common in the Computationalist approaches. Brains are definitely necessary but definitely only part of understanding what goes on with humans, or any other animals for that matter.

>Brains are definitely necessary but definitely only part of understanding what goes on with humans, or any other animals for that matter.

I tend to lean towards that belief myself, but *definitely* only part? Can I see your hard evidence for such a claim? This is science - everyone is welcome to subscribe to any school of thought they wish in the face of currently insurmountable unknowns, but it's important to remember that those are only beliefs, not knowledge. Until there is hard evidence one way or the other.

I would say the shoe is on the other foot. Show me a single intelligent, adult human without a body and I'll happily remove the "definitely" :-). As far as science has been able to show so far, both brains and bodies are necessary. I think it's certainly possible that a body is only strictly necessary for the developmental phase, but that's an empirical question to test. The key problem here is what we call "intelligence". If the definition of intelligence contains only logic processing, then obviously pretty much any computer already qualifies.

You are quite right, I misread in the context of the argument I was having with narcc. You can't understand the complete animal without the complete animal. The question is whether a consciousness can reside in a disembodied brain - certainly it would be at least somewhat different from the one being bathed in a sea of sensations and hormones.

As for computation alone doing the trick - when it comes to the incredibly crude "neural nets" currently used in computer science I'm likewise unconvinced. But with enough computing power and fidelity, who knows?

All good stuff but I guess my issue is with the "given near infinite computing power". The real world with real agents in it is super duper complicated. The problem is that by the point where we have adequate knowledge of the body (including the brain of course), physics, chemistry and all the rest and computing power to simulate it all realistically, we'll have been able to create intelligent humanoid robots for a long, long time. Use the world as its own model, as Brooks would say. I argue that while it m

Why? I've spent time around farms and I like pigs; very similar to dogs, smart animals with highly developed personalities and social structures, will gladly eat their own vomit. They are often reared in horrendously cruel conditions and their minds certainly deserve better treatment, but I don't feel the slightest twinge of guilt when enjoying a bacon and egg breakfast, so I'm sure as hell not going to feel guilty about powering off the PC. - If the screams of the dying PC bother you, turn the speakers off.

I'm glad you feel that way, because as it happens *you* are actually a simulation in my VR9000 entertainment system and, well, I'm running low on processor cycles, and have been putting off terminating any of the virtual characters from a latent sense of guilt. Thanks for volunteering. Tell you what, the new playboy bunny expansion pack won't finish downloading over this slow ansible link for at least a couple more days, so you have until then to put your affairs in order.

I'm glad you brought up glial cells. It's also possible the brain uses protein folding for memory storage and more-than-binary signaling, considering there are so many different neurotransmitters with different functions and an axon can detect more than a few. Couple that with the possibility that the brain does holographic processing (my theory) -- based on evidence of hypothalamus signals to store memories -- the signal is broadcast throughout the brain to record the pattern of brain activity with

I don't see any reason why the computer would have to "get beyond binary", though doing so might well improve performance over simply simulating such capacity. In case you hadn't noticed, computers are already perfectly capable of representing billions of states, even non-numeric things like letters and pictures. And the universe itself is discrete, just on a scale billions of times below our perceptual threshold. If some future chip could simulate the actual complex systems of neurons and synapses we cou

>Let's also not forget that it's pretty well-known that computational approaches to AI are untenable.

Citation? I would imagine a large part of the AI and neuroscience research communities would disagree with you. Not to mention that the fundamental nature of the universe appears to be computational, meaning that our own brains are, on their most basic level, computationally based, with a bunch of (presumably) random quantum noise keeping the whole thing non-deterministic.

I'd say that was a fair assertion based on the simple limits of our algorithms. If you have a calculator and you make it a billion times faster -- it's no closer to sentience. Brute-force computing power is good for databases and expert systems -- but if they cannot derive "human-like" intelligence in a billion years or in 4 seconds, it makes no difference.

Computation power can help simulations, and then if you have enough simulations you might get a neural net to choose better over time -- but that's not yet an AI no ma

You neglect the fact that all available evidence is that our universe itself, and everything in it, is discrete. It's only the fact that the granularity is millions of times finer than our senses can perceive that gives it the illusion of being continuous.

I agree that simply taking a modern neural net and making it a million, or even a billion, times larger is unlikely to magically gain consciousness, but that's not what's being discussed. If we actually simulate the individual neurons and synapses in an

Many excellent papers I'm sure. I'm going to assume they all suffer from one major flaw though: they all discuss hypotheses lacking in any conclusive experimental evidence. And without that evidence we don't *know* anything, we can only believe it. Or not. Reserving judgement would seem to be the rational response.

These sorts of simulations are the first steps in the science to actually understanding, scientifically, how the brain works, and from there hopefully how exactly it relates to the mind. And i

I could, but this is Slashdot, where reading TFA is extra credit, you expect me to search out papers to skim for the sake of a mediocre discussion? And the fact remains - all we have now is the negative evidence that the things tried so far have apparently failed to create a mind - whether that's due to flawed hypotheses or only insufficient scope and/or detail we have no idea. Because honestly, the only attempts we've made at strong AI so far have been based on preschool-level hypotheses of how the mind works.

I could, but this is Slashdot, where reading TFA is extra credit, you expect me to search out papers to skim for the sake of a mediocre discussion?

You got me there, but you did ask for them!

You never answered - do you honestly propose that a 100% accurate atomic-level simulation of a specific, fully developed brain, connected to a full complement of prosthetics (virtual or otherwise) for I/O, could not support a consciousness?

Do you honestly propose that a 100% accurate atomic-level simulation of a specific, fully developed, rainstorm, could flood my basement?

Don't confuse a simulation of a thing for the thing itself.

I am quite aware of the limits of behaviorism, but are you suggesting that observing the behavior of an organism has no value at all?

>You got me there, but you did ask for them!

Touché! You have the honor of being one of the very first to have ever actually offered them. I hope you didn't put too much effort into it.

>Do you honestly propose that a 100% accurate atomic-level simulation of a specific, fully developed, rainstorm, could flood my basement?

No, but I do propose that a simulation of a gaming console can play the games on completely different hardware. A sufficiently detailed simulation will act in a manner indistinguishable from the real thing.

I ask again: Do you propose that a consciousness is dependent on a soul or other influence that could not be adequately simulated? Because that's the only way I see that you can honestly argue that a perfectly simulated brain could not harbor one.

That's because you assume that computation alone is sufficient. That point is not only very difficult to defend; we have many reasons to believe it's false.

Simulated water will get nothing wet, just as simulated fire won't burn your fingers, no matter how detailed the simulations. You don't need to posit a soul or ascribe magical attributes to water and fire. Plain old physical reality is sufficient here.

I assume that computation alone is all there is. To the best of current understanding the universe is governed by absolute and immutable laws. There is even serious discussion among the foremost experts in theoretical physics that at its most fundamental level the universe may literally *be* mathematics, and all appearance of physicality is simply an illusion based on our point of reference. Computation as the fundamental nature of reality.

I see. I don't think we're going to get very far. I just can't make that metaphysical leap.

Given enough processing power, I can predict with perfect accuracy the behavior of an atom. Or a hundred atoms, or a billion. Or a hundred trillion trillion, which is getting to the scale of a human brain.

You disagree:

The random noise of quantum mechanics notwithstanding of course. That's easy enough to add in, but it does tend to destroy the predictive capacity.

We can go round and round on this, but frankly you don't seem to have anything to say

Well, I was going to argue against computationalism. Of course, I'm not going to make any progress at all, given the confession you make in that first sentence. How could I? I've repeated a rather simple point twice, though I now see why it is meaningless to you. If we can't even come close to some common ground regarding the nature of the universe, a complex topic like the mind is going to be impossible.

>You disagree:

Not at all. That randomness is a problem for prediction, not simulation. Random noise is relatively easy to add, even if it is impossible to predict.

All right, you want to continue - name me one thing that is not computable. Anything, anywhere, that is in principle not computable. That is not absolutely bound by computable laws governing its behavior at the smallest level. Even QM is bound by computable laws; it's only the particular state into which a particular wavefunction collapses that is random.

Actually, I said that it was unlikely further discussion would be productive as you believe that "computation alone is all there is." It is therefore impossible to further discuss the topic (computationalism) as the conclusion you've come to is demanded by your metaphysical assumptions.

To be frank, I think that is incoherent. If you want to clarify your position, I'll indulge you. But as it stands, we're at an impasse.

The best I can do is respond to this:

All right, you want to continue - name me one thing that is not computable

I'm perfectly willing to continue the conversation based on the postulation that a metaphysical component is a necessary precursor to the mind, but you don't seem to want to do that. And you can't have it both ways. If the mind is rooted in the atoms of the brain, then it can be simulated, because we know the rules those atoms obey, and they *never* do anything else.

>To be frank, I think that is incoherent.

Then you haven't been following modern physics. It's been understood for a long time that there

Well, don't waste your time with me. If what you suggest is true, you can revolutionize computer science! Everything is computable! The halting problem solved! Get to writing -- fame and fortune await you!

Do you accept that atoms and their interactions can be accurately simulated?

My argument boils down to two points:

1) The behavior of the atoms and photons within the brain is sufficient to explain consciousness (at least when coupled with a sensory-motor environmental feedback loop). True or false?

2) Computation is sufficient to simulate the behavior of atoms. True or false?

Starting with (2): True, depending on the nature of the simulation; this cannot be denied. If you want to put a qualifier like "perfectly" or something on it, then it becomes either "false", "unknown", or "unknowable". I'd offer the same for a qualifier like "in principle".

On (1), the only legitimate answer can be "unknown". To claim otherwise is either to make a religious claim or to draw such a conclusion from a set of metaphysical assumptions.

2) I imagine most physicists would disagree with you. Quantum mechanics introduces uncertainty to the system, but to the best of our ability to test it that uncertainty is pure, true, random noise. But okay, I'll concede that conceivably we're missing something that might be impossible to simulate. Let's just be clear that we're postulating new physics here.

1) Quite the contrary. To the limits of scientific understanding the atoms, photons, etc. are all that exist in the universe

That sounds perfectly reasonable, considering this is an area that is not at all understood. This doesn't imply that such new physics would be impossible to simulate. Though, as I've stated before, the ability to simulate it or not is completely irrelevant. Why do you think this is relevant?

To propose that the behavior of those atoms is insufficient to generate a mind is to explicitly propose that some unknown force never before observed is at play.

Again, I'm saying that computation is insufficient. You seem to be having an awful lot of trouble with this point. What do you think I'm suggesting here? It doesn't seem like we're having the same conversation.

I agree it would likely be painless, though I doubt the discontinuity of suspend & resume would be any worse than being sedated against your will. Or, you know, going to sleep every day. If it was contained within a virtual world that was suspended along with it, it would likely experience no discontinuity at all. How could it?

If we create an AI by simulating the brain of an existing person (and you can't really build a simulation of a "generic brain" without a much better understanding than we currently have)

Well, we know our brains exist in a realm that, to the best of our current understanding, contains only deterministic phenomena and random quantum noise, which does have some uncomfortable implications for the existence of free will. On the other hand, free will certainly does seem to exist, so clearly a sufficiently complicated system is capable of delivering at least a convincing illusion of it.

To the contrary. It's perfectly possible to believe in all manner of magic, a soul, a divine creator, etc. without assuming those things are necessary precursors to the existence of a mind. It could be that we create an artificial mind that lacks a soul, to whatever effect that might have. Terminators anyone?

Or from another perspective - if we accept that the brain is only the infrastructure, and that a soul must also alight upon it to give rise to a mind in the union, that does not imply that a soul coul

What difference does it make whether the consciousness resides in software on a computer, or organic structures? We know perfectly well we can "adjust the gates" in an organic brain, even if we're currently still doing so at a very crude level. That's the entire premise of neuro-pharmacology, deep brain stimulation, neuro-prosthetics, etc.

As for the sensory perception feedback system, I agree that might be necessary - but it should be far easier to connect a simulated brain to sophisticated simulated neur

IBM's blue brain project [bluebrain.epfl.ch] has been simulating real brains by painstakingly mapping slices of rat/cat brains onto their software model for more than a decade now. IBM's "Watson" appears to me to be a spin-off from that project. The Jeopardy "stunt" proved Watson is indisputably superior to the best humans at general-knowledge questions (an open-ended problem domain). IBM have developed similar "brain on a chip" technology and have been using it for a while now. The hardware that won the Jeopardy games a coup

The interesting thing is going to be: what happens when Watson starts replacing knowledge workers? One lawyer with a Watson could do the work of dozens. They're mostly just querying a body of knowledge. Dr. Watson could replace your GP. You tell it what hurts, it asks you some questions and spits out a diagnosis. You'd still want a human in the loop to catch obvious errors, or perhaps to enter the queries and interpret the results, but still, you're looking at putting a ton of highly educated people out of work.

One lawyer with a Watson could do the work of dozens. ... you're looking at putting a ton of highly educated people out of work.

Precisely. Watson is already good enough to pass an oral exam for a GP's licence; it's now being used as an expert assistant for medical research. Having devoured the medical textbooks and journal papers of mankind, it can find relationships and patterns that humans have failed to notice. It won't be long before someone teaches it how to develop software; more importantly, it will learn how to extract the broad requirements for that software from companies' archived emails and documents.

You're not wrong, but I don't see this automation creating more leisure time for the average person. There will be an awful lot of people who cannot get jobs because there is not enough work to be done. The rewards will go to the robot owners, and the former workers will be out of jobs. The "you don't work, you don't eat" mentality permeates our culture, so I guess people will have "leisure" time as they starve.

I know it's been said before, but I don't think so many people have been on the brink of having t

Neurons have incredibly complex behaviors, they are not simply threshold triggers as the simple CS model implies. Neural networks in CS have little to do with the actual wiring and primarily chemical systems that are neurons. A little bit of cognitive neuroscience taught in universities would cure most CS majors of this idea that they can get AI simply with a "neural" net made of simple triggering model neurons.
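For readers following along: the "simple triggering model" being criticized, next to a slightly richer leaky integrate-and-fire neuron, can be sketched in a few lines. This is purely illustrative; the function names and all parameter values are made up for the example, and real biological neurons are far more complex than either model.

```python
# Illustrative contrast: a bare threshold unit vs. a leaky integrate-and-fire
# (LIF) neuron, which at least has internal state and time dynamics.

def threshold_unit(inputs, weights, threshold=1.0):
    """McCulloch-Pitts style unit: fire iff the weighted sum reaches threshold."""
    return sum(w * x for w, x in zip(weights, inputs)) >= threshold

def lif_neuron(input_current, dt=1.0, tau=10.0, v_thresh=1.0, v_reset=0.0):
    """LIF: membrane potential leaks toward rest, integrates input over time,
    and emits a spike (then resets) whenever it crosses threshold."""
    v, spikes = 0.0, []
    for i in input_current:
        v += dt * (-v / tau + i)   # leak term + input integration
        if v >= v_thresh:
            spikes.append(True)
            v = v_reset            # reset after the spike
        else:
            spikes.append(False)
    return spikes

print(threshold_unit([1, 1], [0.6, 0.6]))      # True: 1.2 >= 1.0
print(lif_neuron([0.3] * 10).count(True))      # 2 spikes over 10 time steps
```

Even this toy LIF model shows time-dependent behavior (spike timing depends on input history) that a stateless threshold unit cannot express, which is roughly the gap the comment above is pointing at.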

Neurons have incredibly complex behaviors, they are not simply threshold triggers as the simple CS model implies.

You're plainly ignorant. I don't have any threshold triggers in any of my neural networks. Cells have complex protein behaviors, so what? The cybernetic models can be Turing complete. This means that if I really wanted to waste CPU power instead of understanding the fundamental principles of cognition, I could build a neural network that emulated the molecular action of cellular proteins, and if our rate of computer advancements holds, that machine intelligence would be able to emulate the molecules that make up human neuron proteins, and eventually an entire human head right down to the molecular level. Artificial neural networks can yield every bit as much complexity as anything else in nature. Did you forget that electrons are made of quantum particles or something? Now, we're shooting for determinism and thus applying quantifications in most cases, but in the future we'll harness things like eddy currents once our neural-net modeling methodologies have nailed down and abstracted more of the key components from which complex behaviors emerge.

Neural networks in CS have little to do with the actual wiring and primarily chemical systems that are neurons.

Nor do the artificial neurons need to have anything to do with organic ones except very basic fundamental properties which produce the complexity of response and thus intelligence. I suppose next you'll be telling me that without putting a human brain in the boxen we won't be able to make personal computers do mathematics.

You are what I call an organic chauvinist. What's so damn special about the precise chemical functionality of organic brain operations? If the organic chemputers were such a grand and complex design in need of exact duplication to achieve any degree of similar intelligence, then why are dumb computing machines even able to revolutionize computation? How are digital cameras doing facial recognition with far less computational power than human brains require? It's true that organic neurons have more internal state and some of the details of the process by which neurons operate are still undiscovered; however, we don't need to achieve the exact nuanced behavior of human neurons, or even the same human-brain neuron capacity scale, or even its same connectivity types, in order to produce intelligent behaviors. There are some general principles at work that any complex system will exhibit in order to achieve a given behavior, and those are worth emulating in an optimized fashion. Nature has converged upon solutions randomly, using trial and error and going with the first working attempt the entropy gives her, whether it is optimal or not. Replicating every detail of said accidental functionality exactly is no more essential than it is essential for creatures to have four legs in order to walk.

It's already been proven that complexity yields intelligence. The more neurons the smarter the entity. In fact, we have been determining the minimal degree of complexity required to solve various problems, and nearly universally we can solve the same problems with far less complexity than the equivalent solution in nature, since organisms weren't intelligently designed. There is no binary dichotomy: An interaction does not reach some threshold and then magically becomes intelligent. Instead, there is an intelligence gradient: All systems exhibit some degree of "intelligence" AKA processing power, and the amount scales with complexity. Even a run of dominoes has some small degree of intelligence. Human brains have a lot of neurons doing stuff that isn't even required to produce sentience (thermal regulation, breath control, motor skills, etc). In fact, you can take whatever estimate your cognitive neuroscience prof claims the human brain has as a yardstick for the complexity requirement of sentience and [youtube.com]

What's so damn special about the precise chemical functionality of organic brain operations?

Look, until you manage to understand consciousness precisely, there's lots of room for more complexity there than what you normally think of when you talk about a chemical reaction. For instance, we keep finding out that our senses are based on quantum effects. What if it turns out that consciousness is dependent on them as well? At minimum you'd need a quantum computer to get those results. And doesn't intelligence as we understand it require consciousness? Do we have some massive parallel iterator, or is

To bolster the appreciation of organic processes -- I'll say that one cell in the human body has more capabilities and complexity than any single factory on the planet yet created by man.

Grab a few million base pairs while folding and copying the blueprints, construct any one of a million organic molecules, repair itself, and then requisition more materials all on the head of a pin with room for a few thousand more factories? Oh, and while protecting itself from countless biological saboteurs and access att

AC: It's not an invocation of "quantum god-head" to state the fact that quantum behaviors are observed in our sensory and perception organs and that we probably need a better conception of quantum mechanics to match some of the computation aspects of human beings.

Dinkypoo: We don't know if there is anything special about the brain and its particular computation structure, but we're making progress on a lot of fronts very rapidly. I think the summary of the long post is that *thus far* nothing about the brai

I agree with you about the quantum processes. It's kind of like attaching "nano" to anything small including chemistry. Quantum processes might be part of every-day and ordinary events like using a compass to find the magnetic north pole -- in fact, Quantum has to be part of nature because everything is built on it by necessity.

And there are many organic processes that while complex, are not efficient. Wires can transmit signals many times faster than our nervous system for instance.

You are making good points here -- but nobody was arguing them on this thread.

I'm glad you are really good at cybernetics. It seems like you've been waiting a LONG time to pounce on someone stepping on your turf.

>the attempt to dissuade interested CS folk from becoming cyberneticians would be equally as foolish as decommissioning the Large Hadron Collider before discovering the Higgs boson.

I'll bet anyone a dollar that the Large Hadron Collider will not discover the Higgs boson. And anyone dissuaded from b

Just because human intelligence may not be based on simple triggering does not mean that the model wouldn't work for a computational AI. Or are you arguing that anything produced by such mechanics wouldn't be "true" AI?

Good old clueless tech journalists, followed by Slashdot editors just copy-pasting.

The chips aren't 9,000 times faster than a typical PC for general tasks. Specifically, they can simulate neurons 9,000 times faster than a PC can simulate neurons. Pretty typical of any ASIC with a limited set of highly specialised functions.

It isn't a typical ASIC: the chip is a custom, fully asynchronous, mixed digital/analog design; the board uses 16 chips in a tree router with guaranteed deadlock prevention between the chips; and it can simulate 1 million neurons powered by a single USB port.

The neurons are implemented with analog circuits to match the dynamics of real neurons, moving beyond a simple Hodgkin-Huxley model to include components like ion channels, a first of its kind in an analog chip. It has a neat hexagonal resistor network that distributes the spike impulse across a neighborhood of neurons, a phenomenon seen in many cortical brain areas; essentially an analog phenomenon implemented efficiently in analog design.

Analog gives it fun biology-like properties, such as a temperature sensitivity that must be regulated with additional circuitry. The asynchronous design means that, outside of leakage (which is low with such a large fabrication process), very little energy is used at the neuron level when no stimulus is present. This is in contrast to a traditional CPU, whose clock forces much of the chip to consume energy on every cycle.
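A toy way to see why asynchronous (event-driven) operation wins when activity is sparse -- this is my own illustration of the general principle, not Neurogrid's design:

```python
import heapq

def clocked_updates(n_neurons, n_ticks):
    # A clocked simulator evaluates every neuron on every cycle, spike or not.
    return n_neurons * n_ticks

def event_driven_updates(spike_events):
    # An event-driven simulator does work proportional to the number of
    # spikes, not to elapsed time: pop events in time order, one update each.
    queue = list(spike_events)          # (time_ms, neuron_id) pairs
    heapq.heapify(queue)
    updates = 0
    while queue:
        heapq.heappop(queue)
        updates += 1
    return updates

spikes = [(0.3, 7), (1.1, 2), (4.2, 7)]    # sparse activity
print(clocked_updates(1000, 10_000))        # 10,000,000 evaluations
print(event_driven_updates(spikes))         # 3 evaluations
```

When only a handful of neurons fire, the idle ones cost (almost) nothing -- which is exactly the energy argument for the asynchronous chip.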

Outside of wireless/signaling stuff, this is probably the biggest mixed analog digital asynchronous chip in existence.

It will be a while before we can understand just how important such circuits can be. It may be that they simply supplement CPUs already in use. And much will depend upon just how deeply we can program such a device as well. It may well be that the worst path to take would be to try to get a machine to think like a human. We humans are a bit on the defective side. How well can we think when we have a history of electing people like George W. Bush as President? The evidence at hand is that humanity

9000 times faster than a PC, if that PC happens to be running the specific artificial neural network simulation implemented in hardware by this chip.

Not that I'm knocking it. A GPU implements specific algorithms to great effect. But a GPU's algorithms are ones that are interesting for a specific application (drawing texture-mapped polygons), whereas an artificial neural network still needs another layer of programming to do something useful. In other words, a Word Processor implemented on this chip would not be 9000x faster than a Word Processor implemented on a CPU. A face recognition algorithm, on the other hand, might see a decent fraction of that 9000x, although it remains to be seen whether this chip would be a better fit for any particular application than a GPU (for example).

At $40,000 to perform the task of 9,000 PCs, you'd need the PCs to be $4.44 each in order to match the price-to-performance ratio of this board.
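Just to spell out the arithmetic (taking the 9000x figure at face value, which as others note only applies to simulating neurons):

```python
# Break-even PC price for matching the board's price/performance:
# $40,000 board doing the work of 9,000 PCs.
board_price_usd = 40_000
equivalent_pcs = 9_000
break_even_pc_price = board_price_usd / equivalent_pcs
print(f"${break_even_pc_price:.2f}")   # $4.44
```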

I imagine the kinds of computing networks being used for neural simulation research are well in excess of that $40,000 price tag, so why wait for mass production to bring the price down further? There's value today. Unless of course something is horribly off in the reporting.

It would depend what you want to do. A person doing neuroscience would usually want to make their own neuron model, which is bound to differ from what is hard-coded in this device. A neuron model can be anything from a simple sigmoid function (but you can handle tens of millions) to a detailed electrochemical simulation.
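To make the two ends of that modeling spectrum concrete, here are toy versions of both in Python (illustrative sketches only -- real research models and their parameters are far more elaborate):

```python
import math

def sigmoid_neuron(inputs, weights, bias=0.0):
    """Rate-based 'neuron': a weighted sum through a squashing function.
    Cheap enough that a PC can handle tens of millions of them."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def lif_neuron(input_current, t_max=100.0, dt=0.1,
               tau=10.0, v_rest=-65.0, v_thresh=-50.0,
               v_reset=-70.0, r=10.0):
    """Leaky integrate-and-fire: one differential equation per neuron,
    emitting a spike whenever the membrane voltage crosses threshold.
    A small step toward the detailed electrochemical models."""
    v, spikes = v_rest, []
    for step in range(int(t_max / dt)):
        v += dt / tau * (v_rest - v + r * input_current)   # leaky integration
        if v >= v_thresh:
            spikes.append(step * dt)    # record spike time (ms)
            v = v_reset                 # reset after firing
    return spikes
```

A researcher picks a point on this spectrum to suit the question being asked, which is exactly why a hard-coded neuron model can be a deal-breaker.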

It’s very, very different; neuromorphic chips have been around for ages. They use the same phenomenon the brain does (ion flow across a neuron's membrane) through a different medium (electron flow across a silicon membrane).

The big difference is that they make use of analogue computation using the physical properties of electricity to model whatever you’re trying to model, whereas digital computers model things by representing quantities as symbolic values.

In 1989 I was doing billions of connections per second for trainable multi-source image segmentation on DataCube finite impulse response filter hardware: the FIR filters did the weighted sums, and a hardware look-up table did the sigmoid mapping. That was around $40,000 in off-the-shelf VME-bus hardware -- but 1989 dollars, so I guess there has been some advancement.
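In software terms, the pipeline the parent describes looks roughly like this -- weighted sums feeding a precomputed sigmoid table instead of an exp() per activation. Names and table sizes here are my own invention, not DataCube's:

```python
import math

LUT_SIZE, LUT_RANGE = 1024, 8.0   # table covers inputs in [-8, 8)

# Precompute sigmoid once; at run time it's just an index and a fetch,
# which is what made a hardware look-up table so attractive in 1989.
SIGMOID_LUT = [
    1.0 / (1.0 + math.exp(-(-LUT_RANGE + 2 * LUT_RANGE * i / LUT_SIZE)))
    for i in range(LUT_SIZE)
]

def sigmoid_lut(z):
    """Table lookup in place of computing exp() per activation."""
    i = int((z + LUT_RANGE) * LUT_SIZE / (2 * LUT_RANGE))
    return SIGMOID_LUT[max(0, min(LUT_SIZE - 1, i))]   # clamp out-of-range inputs

def neuron_output(inputs, weights):
    z = sum(w * x for w, x in zip(weights, inputs))    # the FIR-style weighted sum
    return sigmoid_lut(z)

print(neuron_output([1.0, 0.5], [0.2, -0.4]))  # z = 0.0, so output is 0.5
```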

of a brain. How do they plan to get and process that much I/O? Oh, and what storage scheme are they planning on?

This might be yet another step on the way to a truly sentient artificial intelligence - but even if these things live up to their promise, we still have a long way to go before we artificially create consciousness. Incidentally, how will we know that we've succeeded? The old "ability to ask the question is the answer" rule doesn't apply, as the device could easily be programmed to ask - or mi

At $400 a pop, I'd be willing to shell out the cash to have access to this kind of chip/board. There's at least one direct application I'd like to try: source code analysis. The current tools are quite powerful, mind you, but I'm sure the pattern-recognition capabilities of such chips should be a lot better at pinpointing ill side effects, inefficiencies, memory leaks and such.

The article is misleading - they are not 9000 times faster than a PC for general tasks. The chips can simulate neurons 9000 times faster than a PC can simulate neurons, but there's no mention of how fast those simulated neurons can solve a problem for you.