
crabpeople writes "Researchers at the University of Calgary have found that nerve cells grown on a microchip can learn and memorize information which can be communicated to the brain. 'We discovered that when we used the chip to stimulate the neurons, their synaptic strength was enhanced,' said Naweed Syed, a neurobiologist at the University of Calgary's faculty of medicine."

Perhaps a key use is not to use neurons to improve silicon chips, but to do the opposite.

Who knows, in a few decades we might have people deleting their childhood to store and smuggle hundreds of GB of information about the cure for a major epidemic that an evil pharmaceutical company is exploiting for profit.

People get divorced and lose their families and free time due to the high demands of the current marketplace.

People needing to do more work each day take pills to reduce the need for sleep.

Employers needing to cut training costs develop the "Plug N Work" chip. When you get hired you are assigned a read-only chip that has all of the company's policies, procedures, employee names, and specific work duties for each task.

Employers add wireless to the PNW chip to rapidly update corporate policies as they are implemented.

The tasks and skills for your job (doctor, lawyer, tech support, etc) are duplicated by a firm that sells the chips to your company. Your wage just became minimum because now ANYONE can walk off the street and perform the function.

Wireless communication reaches the brain level and we go from being worker drones to Borg drones. This eliminates the internal need for teleconferencing, e-mail, telephones, or bulletin boards. Your pr0n and Slashdot time at work become obsolete in the new order as everyone would know what you were doing.

Underground hackers develop technology to override The Companies' chip and deliver Slashdot, goatse.cs, and pr0n unbidden to all receivers in the area.

George Orwell's dream of the thought police and ultimate revisionism become a reality.

Paranoid? That's a beautiful vision. We no longer have to waste time on training or put up with incompetence: everyone will be equally competent, everyone will be able to perform their task perfectly.

Minimum wage? No, you're not thinking this through. This is a true commodification of labour. The entire economy will have to change to accommodate this idea... and it will be fantastic! This is something that Yevgeny Zamyatin would've loved to include in his utopian novel We.

i don't think people would be equal because the good chips would probably still be owned by bad (read: greedy) people. i mean, in theory it sounds like a nice utopian marxist wet-dream, but there is too much inertia keeping the system the way it stands. and personally, any sort of wild-und-crazy hive-mind is not something i'd ever want to participate in. the distractions would be omnipresent (you think video games are addictive now?) and any sort of rational, thoughtful, political or philosophical discour

What you guys are failing to take into consideration is the difference in heat given off between a resistor and neurons. Even if neurons are slower and larger, the fact that they can be packed together without need for cooling makes them much more powerful/useful.

Well, I find mine useful anyway; I am sure some people have mixed results.

Neurons are much larger than transistors, but the two aren't really comparable. The main body of a neuron is usually around 25 microns (25,000 nm) in diameter and runs at a "clock speed" of a kilohertz at most.

A neuron is much more than a transistor-like switch. On one side of the neuron's central body is a set of dendrites that connect to and gather input from other neurons. The average neuron might have a thousand of these dendrites.

The synapse at the end of each dendrite acts like part of a multiply-accumulate term -- taking the signal from another neuron, multiplying it by a numerical coefficient and summing it into the total excitation level of the neuron's body. I suspect that the precision of this multiply-accumulate process is fairly low -- perhaps 8 to 16 bits.

Next, the body of the neuron has a long axon extending from it that sends the output of the neuron to other neurons (connecting to the dendrites of other neurons). This axon can be quite long, millimeters, even inches, in length. Thus, the axon is like an off-chip line driver with the potential to have a very high fanout (of 1,000 or more). (On a modern microchip, these off-chip connections are driven by much larger transistors than the small 65 nm ones used in computation.)

Third, a neuron is not a static multiply-accumulate system. The coefficients on each synapse change in response to long-term adaptive processes. This process is computationally complex and includes cross-correlation of inputs between synapses and processing of other chemical signals in the brain. Cross-correlation alone could require the equivalent of several kilobytes to several megabytes of RAM. (We won't even get into the adaptive processes that include physical growth and removal of dendrites, as this has no easy analog in hardware.)

In summary, a neuron is more than a transistor-like switch. It's a free-running 1,000-register multiply-accumulator with an off-chip line driver and a statistical processing engine that updates the coefficients on each of the multiply-accumulate terms. Thus, emulating a single neuron would require hundreds of thousands to millions of transistors.
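
To make that multiply-accumulate picture concrete, here is a minimal Python sketch of the model described above; the weight values, threshold, and the Hebbian-style update rule are illustrative assumptions, not anything measured from real cells:

    import random

    class ToyNeuron:
        """Crude stand-in for the 'free-running multiply-accumulator' described above."""
        def __init__(self, n_inputs=1000, threshold=0.5):
            # One coefficient per dendrite/synapse; the starting values here are arbitrary.
            self.weights = [random.uniform(-0.1, 0.1) for _ in range(n_inputs)]
            self.threshold = threshold

        def fire(self, inputs):
            # Multiply-accumulate: each input times its synaptic coefficient, summed.
            excitation = sum(w * x for w, x in zip(self.weights, inputs))
            return excitation > self.threshold

        def adapt(self, inputs, fired, rate=0.01):
            # Stand-in for the slow adaptive process: strengthen synapses that were
            # active when the neuron fired (a Hebbian-style rule, chosen for simplicity).
            if fired:
                self.weights = [w + rate * x for w, x in zip(self.weights, inputs)]

    neuron = ToyNeuron()
    pattern = [random.choice([0.0, 1.0]) for _ in range(1000)]
    fired = neuron.fire(pattern)
    neuron.adapt(pattern, fired)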

But it clearly would be folly to try to emulate a neuron using purely digital computing techniques. You're dealing with an analog mechanism that is pretty much a wired-OR of many inputs feeding into a capacitor. This is very much an analog computing circuit; now the question is how efficiently you can do A/D-D/A conversion on this scale.

(And as I recall, the sciatic nerve running down your leg is a single cell with an axon over 1 foot long. Definitely some impressive stuff Mother Nature has concocted...)

Or, for a more software interpretation, it's a function that takes a bunch of boolean parameters and returns a boolean. Anyone who's ever done any programming or computer architecture should see why you can easily process anything with this.
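
For instance, a minimal sketch of that boolean-in, boolean-out view (the weights and threshold below are made up purely for illustration):

    def neuron_as_bool(inputs, weights, threshold):
        # Treat the neuron as a function from booleans to a boolean:
        # sum the weights of the active inputs and compare to a threshold.
        total = sum(w for x, w in zip(inputs, weights) if x)
        return total >= threshold

    # A 3-input AND-like unit: fires only when all inputs are active.
    print(neuron_as_bool([True, True, True], [1, 1, 1], threshold=3))   # True
    print(neuron_as_bool([True, False, True], [1, 1, 1], threshold=3))  # False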

Or, for a more software interpretation, it's a function that takes a bunch of boolean parameters and returns a boolean. Anyone who's ever done any programming or computer architecture should see why you can easily process anything with this.

Excellent point. You are right about the computational flexibility of neurons. They can represent a wide range of logical functions, although I believe that the single neuron is incapable of doing an XOR.

But a neuron is more than a Boolean circuit. Although a neuron seems like a two-state device (it's either quiescent or it's firing), it is more of an N-state analog device in which the pulse rate encodes a numerical quantity (probably the equivalent of an 8- to 16-bit floating point number). That is why the dendrite field is like a giant numerical multiply-accumulate.

Excellent point. You are right about the computational flexibility of neurons. They can represent a wide range of logical functions, although I believe that the single neuron is incapable of doing an XOR.

Actually, I think it can be done (or at least a partially working XOR.) Imagine a neuron with two inputs and an output. But these inputs are not both excitatory: one is excitatory and the other is inhibitory. So, input only from the excitatory branch produces an action potential, and input from both bra
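
A quick sketch of that arrangement (the weights and threshold are arbitrary choices to illustrate the idea): one excitatory and one inhibitory input gives you "A and not B", which is the partially working XOR described above.

    def two_input_neuron(excitatory, inhibitory):
        # The excitatory input pushes the cell toward firing; the inhibitory input pulls it back.
        potential = (1.0 if excitatory else 0.0) - (1.0 if inhibitory else 0.0)
        return potential > 0.5   # fire only if net excitation exceeds the threshold

    for a in (False, True):
        for b in (False, True):
            print(a, b, "->", two_input_neuron(a, b))
    # Fires only for (True, False): "A and not B", half of XOR's truth table.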

But a neuron is more than a Boolean circuit. Although a neuron seems like a two-state device (it's either quiescent or it's firing), it is more of an N-state analog device in which the pulse rate encodes a numerical quantity (probably the equivalent of an 8- to 16-bit floating point number). That is why the dendrite field is like a giant numerical multiply-accumulate.

You're right on-- the change in firing rate relative to the baseline firing rate is very important. Also, there is some reason to think (logica [ucla.edu]

I think parent (along with some other posts) is confusing the biological neuron and the perceptron, which is a simplified mathematical model.
While the perceptron can't cope with linearly inseparable problems (like XOR), there is no consensus on the computational limits of the neuron. In fact, very little is known for certain about the learning algorithm used by the nervous system.
The neuron may learn not only through the weights of its inputs, but also through chemical interactions with glial cells. Really, the neuron is still too much of a mystery for us to know its limitations.

OK, now that's a good explanation of why humans can so easily (mentally) manipulate objects in 3D space without doing any math.

I've always figured that the best design for a computer would be one that's able to "imagine". Since it would take too many transistors to emulate a neuron, maybe there's some other way to do it? Is binary the only way to compute?

"We discovered that when we used the chip to stimulate the neurons, their synaptic strength was enhanced,... "
There was something like this in one of Asimov's books. The guy's synapses are enhanced by a machine, then the guy starts to "feel" and "manipulate" things.

But this is very exciting. The idea that we could grow neurons on silicon is one of those big steps that looks to lead us into the Johnny Mnemonic world that Gibson was talking about just a couple stories prior to this one.

There is a song that says, "It only takes a spark to get a fire going". So too is it true that it only takes a couple neurons to start synapsing. As these true neural webs become more complicated, it would be interesting to see if any kind of emergent behavior was evident.

Also, with the current political and scientific climate as it is, this could be the first step to replicating a nervous system without having to rely on fetuses for stem cells. It requires no human cloning and holds immense promise.

It would definitely be cool to have a couple of these chips implanted to enhance the base memory that we are kitted with at birth, that's for sure!

The idea that we could grow neurons on silicon is one of those big steps that looks to lead us into the Johnny Mnemonic world

No it's not. This involves interfacing with the neurons that are already there.

As these true neural webs become more complicated, it would be interesting to see if any kind of emergent behavior was evident

Given that large collections of neurons are well known to exhibit emergent behaviour, I think it would be more interesting if they didn't.

this could be the first step to replicating a nervous system without having to rely on fetuses for stem cells. It requires no human cloning and holds immense promise

Nerve cells harvested from an animal brain can be grown in the lab. There is no need for embryonic stem cells or cloning at all. Growing them on silicon does not make this easier - in fact they will probably grow better in a petri dish.

It would definitely be cool to have a couple of these chips implanted to enhance the base memory that we are kitted with at birth

Memory in the brain is not simple storage of information. It is unlikely that plugging a DRAM into your brain would be able to enhance your memory.

Since you are only using a ridiculously small part of the storage space you have available at any time in your life, the cool thing would be to have an electronic device that could strengthen a given synaptic path, allowing you to "refresh" your memory at will and not forget important things (like reading the whole C++ w/ libraries reference once and then refreshing it every night).

I am not so much interested in the Hollywood vision of this, although Ice-T deserved an Oscar for his performance. What I think is interesting is to think about the limits of our brains and how this could be used to expand consciousness.

I think it would be interesting to understand how a neural interface would 'feel'. What would a process based in ones and zeros feel like? How would the brain adapt to take advantage of the new processing capability? Would we be able to project our consciousness outsi

No. You're right, growing neurons on silicon is nothing new, but the breakthrough here is that they have been able to stimulate the neurons into forming new connections, rather than just measuring the response of existing networks.

It's kinda funny: a few years ago (back in the '80s) my dad actually did this. Believe it or not, he was the first one to grow a neuron on silicon (a Motorola chip, for those interested). The poster with the electron micrograph of it was absolutely everywhere (we had 1000s of the posters in the basement). I even remember going to high school science class and, sure enough, there was my dad's poster. The hype surrounding this was insane, mostly due to the fact that everyone thought this was the true start to cybernetics. In the end, the hype died down, my dad's lab got a ton of grants, and he got back to doing more research. Ironically enough, the most publicized research that he did (the neuron on a chip) probably had the least impact. Such is the world of science at times. :) So, yes, it's nothing new. Just repackaged.

Quote from the above link: This particular chip has no electrodes. The grillwork design allows the neurons to grow, and contains them indefinitely. We are currently building full chips with this design, and with electrodes.

Keep an eye out for this page. Once we get fully functional chips, it shouldn't be long before I can show some real experiments and data.

I think the big news is that electrodes were on the silicon chip, and were actually able to "learn and memorize information which can be communicated to the brain" (as per the original article).

Also, the page looks like it hasn't been updated since 1995. I wonder what happened to this project. From the page, Maher and Thorne seemed so close to what has just been achieved in Canada.

Potter has done a lot of work on the project since then, and electrodes were definitely incorporated. He has linked the cultured network up to a variety of output devices, including a stylus device to 'draw', a robot to maneuver, and a DOOM-like virtual environment.
http://www.gatech.edu/news-room/release.php?id=160
http://www.wireheading.com/roborats/hybrots.html

Weird -- I remember reading an announcement on this subject on Usenet back when I was in university. What's more, I was able to google for the original article [google.ca] from January, 1991:

Hello. I just wanted to inform the netland that a direct nerve to transistor interface is finally operational. The invention was privately announced 1 month ago, but is now out in the public. It is possible now to grow a nerve over a silicon substrate in a way that the nerve has a capacitive connection to a FE-Transistor built into

Maybe it's time to admit that nature does a better job bruteforcing (OK, what else do you call SEX and EVOLUTION?) the secrets of this world than all our mathematical precision... (E=MC2... Forty Two... naah... doesn't work)... Of course, nature did a better job making us humans than we would have achieved... :)

Evolution != bruteforcing. With bruteforcing (e.g. trying to guess a password with a dictionary) there is no "being on the right path" or whatever. It's just wrong or right. Evolution is survival of the fittest: make minor changes in different directions on an existing system and see which ones lead closer to success (just like sex ;-)). Take many of the fittest and do the same again. Then sometimes take some of the not-so-fit and try the same as well.
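
A minimal sketch of that mutate-and-select loop (the objective function, population size, and mutation size below are arbitrary placeholders):

    import random

    def fitness(x):
        # Placeholder objective: the closer to 42, the "fitter".
        return -abs(x - 42)

    population = [random.uniform(0, 100) for _ in range(20)]
    for generation in range(100):
        # Minor random changes in different directions on the existing systems.
        offspring = [x + random.gauss(0, 1) for x in population for _ in range(3)]
        ranked = sorted(population + offspring, key=fitness, reverse=True)
        # Keep many of the fittest, plus a few of the not-so-fit.
        population = ranked[:15] + random.sample(ranked[15:], 5)

    print(max(population, key=fitness))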

On the other hand you are right: this trial and error seems to lead to better results in the long run compared to deterministic creation. But this scheme has already been adopted by science. IIRC there was a distributed computing project simulating a robot with a defined task and changing the parameters of the robot. The different clients exchanged information about the results. I don't remember the name or the homepage of the project anymore; I think it was already 4 or 5 years ago...

But from a logical point of view: if a generation with several individuals, each of them with minor changes compared to its ancestors, is born, for some individuals their changes will be an advantage, and for some the changes will be a disadvantage.

The weaker individuals will not spontaneously die, but they might have fewer children, or maybe only a few of their children will survive. The stronger individuals will have more children, or if they have th

The key is realizing that the first Q/A pair doesn't really have anything to do with the way evolution actually works. A better way to put it is, "Some survive, some don't, and we designate the former group as 'fit'." IOW, "fitness" can be defined (exclusively) as that collection of traits which leads to survival and reproduction.

It's tempting for us, as humans, to believe that we represent the peak of evolutionary fitness. We don't; no organism does, because fitness isn't static. What traits are usefu

I disagree. Nature had a lot of time to "bruteforce" things. Give us the same amount of time and we will see what we'll be able to do in terms of "reengineering the world". Modern science is a 400- to 500-year-old thing; nature had billions of years to reach the levels we see. I think that the progress we are achieving in the last 50 years is *really* impressive, and probably what we'll see in the next 50 years will be even more impressive. Sometimes humans deserve more credit, IMHO.

We discovered that when we used the chip to stimulate the neurons, their synaptic strength was enhanced

If only they could find out how the strength increased and whether we can do the same in the human body, we could find a cure for most nervous-system degradation diseases. Anybody have a link to a more verbose article?

It's something called long-term potentiation, and neuroscientists have known about it for a long time. If you get a neuron to fire enough, its synapses will strengthen. It's been a while, but I believe the mediating mechanism involves calcium-triggered protein synthesis.

FYI, LTP is one of the most promising mechanisms proposed for explaining how long term memory works.
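
In artificial-network terms, the usual caricature of LTP is a Hebbian rule ("cells that fire together wire together"); here is a toy sketch, with a made-up learning rate and starting weight:

    def hebbian_update(weight, pre_fired, post_fired, rate=0.1):
        # Strengthen the synapse whenever pre- and post-synaptic cells fire together.
        if pre_fired and post_fired:
            weight += rate
        return weight

    w = 0.2
    for _ in range(10):               # repeated co-activation...
        w = hebbian_update(w, True, True)
    print(w)                          # ...leaves the synapse "potentiated" (stronger)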

Yes, that remains, but let's say the strength increases when some additional compounds are introduced. For example, if the reason for the strength increase is silicon, then we can counter the effect of the underlying disease. The disease may remain in the body, but with regular "doses" of whatever caused this increase in strength, maybe we can counter the side effects to a great degree. I look at this as a breakthrough not only for the computing field, but also for the medical field. With the way we are

Not making faster Pentiums or Athlons. Sorry. Most of that magic has already been woven. Who out there is qualified to make systems level designs and decisions about bio computer systems? Think about the type of knowledge it must take about physics, electrical and computer engineering, as well as biological knowledge.

What type of magnetic and power restrictions will there be? Reliability? What type of optimizations will exist? Interfaces? Flexibility?

We're still quite far away from having things like this be applicable to the modern day, but think about when you too can say, "I know Kung Fu"!

You know, I had read somewhere that our brains (individual processes) run at around 200 MHz (as it is all done electro-chemically); now, just as we have hundreds of billions of neurons, so do we have billions of transistors on chips.
The difference here is that our brains use the 3rd dimension effectively (and also work in parallel, I think). Now I'm not sure if the latest breakthrough uses electro-chemical processes to communicate, but if it's faster than 200 MHz, it definitely has huge potential.

Actually I think it's a big mistake to think of the brain in terms of CPU computing power. The brain does not simply use brute-force computing power to solve problems and handle special tasks and situations. We have a lot of built-in or learnt features to do so efficiently. For example, we use a lot of shortcuts to (not always correctly) solve a task. Just think of optical illusions: the brain uses some cues to judge a situation instead of doing a correct calculation. Or think of reflexes: a lot is happening before the brain is even involved, e.g. when you put your hand on something hot (yep, that Athlon that has been running for 2 weeks straight ;)) the signal to take the hand away gets sent straight from the spine before it reaches the brain.

Unless we use equivalent mechanisms for CPU-based computing, comparing the speed of the brain to silicon-based units IMHO doesn't make much sense.

Yes, neural networks by their very nature are really nothing more than adaptive filters, aka classifiers. The eye does a great deal of preprocessing such as edge detection, motion detection, etc. (aka classifying) to reduce the workload on the brain. Neural networks could perform similar preprocessing to reduce the workload for CPU-based image recognition systems.

Actually, the idea of "reflexes" is the same as in electro-robots which can sense objects by electrical load. That hot plate is nothing more than an over-threshold input that causes an electro-motor response. An electrical circuit could also easily be designed to include a little bit of fuzzy logic via a simple analog circuit to achieve the same thing.

So, equivalent mechanisms are not readily available for CPU-based computing, but they are for ANN-based computing. If we ever hope to match the basic capabilities of animals we cannot just rely on CPU-based computing; we also need ANN-based computing for sensor preprocessing and feedback-controlled motor functions.
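
As a rough illustration of that kind of sensor preprocessing and reflex loop, here is a sketch using a simple neighbouring-pixel difference for edge detection and a bare threshold for the "reflex"; none of this comes from the article, it is just the textbook idea in code:

    import numpy as np

    def edge_strength(scanline):
        # Difference of neighbouring pixels: a crude stand-in for retinal edge detection.
        return np.abs(np.diff(scanline))

    def reflex(sensor_value, threshold=0.8):
        # An over-threshold input triggers the response before any "brain" is involved.
        return "withdraw hand" if sensor_value > threshold else "no action"

    scanline = np.array([0.1, 0.1, 0.9, 0.9, 0.2])   # a toy image scanline
    print(edge_strength(scanline))                    # large values mark the edges
    print(reflex(0.95))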

The precise properties of individual neurons are unpredictable and highly variable. Worse, they require constant life support just to stay alive. A 5-minute power interruption to your neural CPU and it's time to go shopping for a new one. You would certainly not want to build a practical computing tool out of them.

Neural computing will remain the domain of highly specialized research into AI and neural computing forever. We may develop neural analogs using nanotech or some other gee-whiz tech, but they will not be true neurons.

Neural computing will remain the domain of highly specialized research into AI and neural computing forever. We may develop neural analogs using nanotech or some other gee-whiz tech, but they will not be true neurons.

I disagree, I think neural computing will have practical applications, but more along the lines of neural interfaces than actual computers. Imagine a prosthetic arm that works just like the old one did...

It seems like both of these difficulties (unpredictability and constant power) could be overcome. The manufacturing process would have to involve a training stage, where the neurons would be put through a series of routines until the connections between the neurons were at the correct strength. As for the power interruption, enough backup power supplies and advance warnings would make this an unlikely event. As long as the chance of power loss was less than the chance of hard-drive failure, it will be a sella

Researchers at the University of Calgary have found that nerve cells grown on a microchip can learn and memorize information which can be communicated to the brain.

While the article mentions this in the introduction, it doesn't mention this happening at all in the research. It talks about neurons communicating with each other. This is a long way from connecting this chip into a living brain in an animal that can still function.

While I agree that this is a fascinating article, we should make sure not to sensationalize it too much. Making chips that interface with actual brains in actual animals, even if they are snails, is still a long way off.

In the year 2250, a small pocket of human resistance finds the means to develop an organic gooker. Using the power of jelly to disable our circuit boards, they start a highly accurate military campaign to overrun the machines...

Tron and Tran are a simple couple thrown together in this all-action, pistol-pumping, explosion-full chase between man and machine. Will their love be enough to conquer the invading h

scientists stimulated one nerve cell to communicate with a second cell which transmitted that signal to multiple cells within the network.

Signal up (probably down too, though that is not said). That's a start. Now let me jump.

Imagine how this would feel in your own brain. Even strengthened to a noticeable level by a lump of neurons, the signal would still read "beep". Now imagine being fed information through that channel. "Beep, bip beep bip bip beep". Better start training that Morse.

Now let's enhance the input by adding more bits to it and running data through a digital-to-analog converter. This is where you would slowly be able to "see colors", one at a time. Low signal, cold feeling; high signal, hot feeling. That is brainable information. You can associate different patterns of these "colors" with different ideas. But still, it's not like you could see any shapes, is it?
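
A trivial sketch of the digital-to-analog step being described, with an arbitrary bit width and voltage range:

    def dac(sample, bits=8, v_max=1.0):
        # Map an n-bit integer onto an analog level between 0 and v_max.
        return v_max * sample / (2 ** bits - 1)

    for sample in (0, 64, 128, 255):
        print(sample, "->", round(dac(sample), 3))   # low = "cold", high = "hot"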

Now add more bytes, feed them in side-by-side. That's a feed. At this point, feel nausea. Something is feeding noise into your thoughts, something you cannot possibly comprehend.

It would take a processing system not unlike vision inside the brain to translate that feed into experiences like colors, tastes, and touches, and then further associate these to make shapes out of the noise.

A long way.

Worth taking, of course, as research goes, but I wouldn't toss away those external displays just yet. I have a hunch computers won't be the same, either, when we get there.

Future research will focus on interfacing silicon chips with the human brain to control artificial limbs and develop "thinking" computers.

That's the beauty of the brain, though. It can make sense of the strangest of inputs. The very nature of neurons and connections in the brain means that if you were to introduce an "input" into the brain using a technique like this, given time, there's a very good chance that the brain will eventually make sense of it. After all, it's a very good learning computer, and this is really no different to the information sent via the optic nerve.

Imagine trying to describe vision to someone who's been blind from birth. It's nigh-on impossible to explain, as it's unlike anything else they can experience. This is what we're seeing here - a new sense we just can't comprehend, yet could offer us such incredible benefits we can't hope to fully understand at such an early stage as this.

Given that, in layman's terms, the brain adapts to changes (people who get a lump of brain chopped out can sometimes adapt slowly over time to accommodate this), it may be possible to implant the very young with mini interfaces which supply a feed.

Now - feed simple messages such as 'food' or 'your mum' or 'barney' into this interface to train it to associate the feed with what's going on around it.

You never know - your brain may well start treating this as a new sense - and you would potentially have some mor

In a press release read before assembled journalists, Intel Corp. announced that growing neurons on Pentium class chips would contravene the DMCA, by allowing competing engineers to directly download chip information into their brains.

When pressed further, the spokesman stated that he couldn't be sure, but believed that growing neurons on AMD chips would however not contravene any laws.

RIAA executives were unavailable for comment, but an anonymous source indicated that at least one executive has been ad

Alan Cooper, author of "The Inmates are Running the Asylum" and other texts put it this way:

Q: What do you get when you cross a camera and a computer? A: A computer.

His point is that from an interface and place-in-the-world point of view, most products that have been digitally enhanced tend to remain closer to their technology roots than their analog counterparts (with all of the usability, and I would say ethical, challenges inherent in a technologist-driven system).

That said, this is pretty frickin' cool, but the double-edged sword presented by this innovation seems both particularly sharp and far reaching. I really hope we get this one right.

Q: What do you get when you cross a camera and a computer? A: A computer.

Perhaps I'm missing the point (I've never read the aforementioned book), but when I cross a camera and a computer, I usually get a camera. Digital cameras are exactly this, no? The question seems a silly one. When we started making bridges out of steel did they somehow become something other than bridges?

A camera is a thing that can capture pictures and later reproduce them. You can use film, or silicon to do that, but it's a c

With research like this going on, will we eventually see a medical solution to tinnitus?

Tinnitus is a serious problem for a lot of people today, and it can have many causes, from various diseases/illnesses to noise damage. It apparently has to do with the nerves in one's ear, so with this kind of research, might we finally see a way to actually treat tinnitus?

Until you get T, you don't realize how lucky people who can actually be in a quiet room without going mad are...

Perhaps it will be possible to make brain implants that enable you to connect to the internet and let others connect their brains to the net as well. Imagine sharing brainpower, or even sharing thoughts, ideas and memories over a filesharing network.

I for one am waiting to get it in my head. I am sick and tired of using clumsy keyboards and mice with eye-hurting displays. I don't want to read from a display - I want to feel it. I don't want to type on a keyboard - I want to think it.

Of course it opens a new field for hacker attacks. But that won't stop us from having it anyway, eventually. Certainly they will work in the area of brain firewalling.

At first, as a simple solution, I could use my personal laptop as a gateway connecting me to the rest of the wor

At the end of WWII research director Vannevar Bush predicted the IT revolution. [theatlantic.com] He was eerily right in many ways, but some things are still to come. For some time I had the following quote hanging on my wall:

In the outside world, all forms of intelligence whether of sound or sight, have been reduced to the form of varying currents in an electric circuit in order that they may be transmitted. Inside the human frame exactly the same sort of process occurs. Must we always transform to mechanical movements in order to proceed from one electrical phenomenon to another?

'We discovered that when we used the chip to stimulate the neurons, their synaptic strength was enhanced,'

...and when we added the RFID, the test subjects had great futures working for Wal-Mart, as they could communicate directly with the pallets of merchandise. In 2.0, the store employees will automagically know when the Gillette razors need to be restocked. :-)

IAAN, and this is not a big breakthrough in any sense. Basically, this is something that was first done using manually-positioned electrodes probably twenty years ago, and now they can grow neurons on a dish that has electrodes built into it and do it that way. WoO-hAH!

The computational power of neurons comes from the way they work in groups, not the way they work alone. Therefore, it's strongly dependent upon the detailed organization of their connectivity. Grinding up a piece of brain and regrowing it on a dish will obviously not retain native connectivity. Additionally, the time it would take to manually rewire an interesting circuit by giving little localized electrical pulses (or do anything else interesting) is longer than neurons are viable in culture, and that's not a problem that's been solved yet.

I'm not saying this technology won't have important uses as a research tool, just that it won't be useful for what people here seem to think it will be useful for (high-density pornography storage). BTW, one of the more interesting characters in this field is Steve Potter [gatech.edu], a somewhat strange guy who does some technically impressive work [uwa.edu.au]

"Future research will focus on interfacing silicon chips with the human brain to control artificial limbs and develop "thinking" computers."
That's one heck of a leap forward from connecting x number of snail nerves together.