“Neuristor”: Memristors used to create a neuron-like behavior

The solid-state device has an output that looks like neural activity spikes.

Computing hardware is composed of a series of binary switches; they're either on or off. The other piece of computational hardware we're familiar with, the brain, doesn't work anything like that. Rather than being on or off, individual neurons exhibit brief spikes of activity, and encode information in the pattern and timing of these spikes. The differences between the two have made it difficult to model neurons using computer hardware. In fact, the recent, successful generation of a flexible neural system required that each neuron be modeled separately in software in order to get the sort of spiking behavior real neurons display.

But researchers may have figured out a way to create a chip that spikes. The people at HP labs who have been working on memristors have figured out a combination of memristors and capacitors that can create a spiking output pattern. Although these spikes appear to be more regular than the ones produced by actual neurons, it might be possible to create versions that are a bit more variable than this one. And, more significantly, it should be possible to fabricate them in large numbers, possibly right on a silicon chip.

The key to making the devices is something called a Mott insulator. These are materials that would normally be able to conduct electricity, but are unable to because of interactions among their electrons. Critically, these interactions weaken at elevated temperatures, so heating a Mott insulator can turn it into a conductor. In the case of the material used here, NbO2, the heat is supplied by the device's own resistance. When a voltage is applied to the NbO2, it heats up and, on reaching a critical temperature, turns into a conductor, allowing current to flow through. Given the chance to cool off, though, the device returns to its resistive state. Formally, this behavior is described as a memristor.
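The switching behavior can be caricatured with a toy thermal model: Joule heating pushes the device past a critical temperature, and Newtonian cooling brings it back. Every number here (the critical temperature, the two resistances, the thermal constants) is invented for illustration; these are not NbO2's real parameters.

```python
# Toy model of a thermally driven Mott switch: the device conducts only
# while Joule heating keeps it above a critical temperature Tc.
# All constants are invented for illustration, not NbO2's real values.

def step(T, V, dt=1e-3, T_amb=300.0, Tc=340.0,
         R_ins=1e4, R_cond=1e2, C_th=1e-3, k_th=2e-3):
    """Advance device temperature T one time step under applied voltage V."""
    R = R_cond if T >= Tc else R_ins           # insulator below Tc, conductor above
    P = V * V / R                              # Joule heating
    T += dt * (P - k_th * (T - T_amb)) / C_th  # heat in, Newtonian cooling out
    return T, R

T = 300.0
for _ in range(2000):          # voltage applied: heats past Tc and switches on
    T, R = step(T, V=40.0)
R_hot = R
for _ in range(5000):          # voltage removed: cools off and switches back
    T, R = step(T, V=0.0)
R_cool = R
```

Running this, the device ends the heated phase in its low-resistance state and relaxes back to the high-resistance state once the voltage is removed, which is the hysteresis the article describes.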

To get the sort of spiking behavior seen in a neuron, the authors turned to a simplified model of neurons based on the proteins that allow them to transmit electrical signals. When a neuron fires, sodium channels open, allowing ions to rush into a nerve cell, and changing the relative charges inside and outside its membrane. In response to these changes, potassium channels then open, allowing different ions out, and restoring the charge balance. That shuts the whole thing down, and allows various pumps to start restoring the initial ion balance.

In the authors' circuit, there were two units, one representing the sodium channels, the other the potassium channels. Each unit consisted of a capacitor (to allow it to build up charge) in parallel with a memristor (which allowed the charge to be released suddenly). In the proper arrangement, the combination produces spikes of activity as soon as a given voltage threshold is exceeded. The authors have termed this device a "neuristor."
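For a sense of what that pair of channel-like units does dynamically, here is a sketch of the FitzHugh-Nagumo model, a standard two-variable caricature of channel-based spiking in the same spirit as the circuit described above. These are textbook parameters for a textbook model, not the circuit equations from the HP paper.

```python
# FitzHugh-Nagumo: a classic two-variable caricature of channel-based
# spiking (textbook parameters; not the neuristor's actual circuit model).
def count_spikes(I_ext, steps=50_000, dt=0.01, a=0.7, b=0.8, tau=12.5):
    v, w = -1.0, 1.0        # fast "voltage" variable, slow recovery variable
    spikes, was_above = 0, False
    for _ in range(steps):
        dv = v - v ** 3 / 3 - w + I_ext
        dw = (v + a - b * w) / tau
        v += dt * dv
        w += dt * dw
        above = v > 1.0
        if above and not was_above:   # count upward threshold crossings
            spikes += 1
        was_above = above
    return spikes

quiet = count_spikes(0.0)     # below threshold: stays quiescent
active = count_spikes(0.5)    # above threshold: a regular spike train
```

This captures the key property reported for the device: below a threshold input, nothing happens; above it, the system emits a regular train of spikes rather than a sustained "on" level.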

As it currently stands, the NbO2 neuristor uses too much power to put in large numbers on a chip. But there are other types of Mott insulators known, and the authors think it should be possible to find one that's both low power and compatible with current chip-making techniques. They suggest there are a variety of ways the spiking behavior could be useful in existing applications. But I'm more intrigued by the idea that it might be possible to get more neuron-like behavior directly on a chip.

Sounds interesting. Though to emulate a neuronic brain, they still have to figure out how to make it dynamically sever and reinforce connections based on how frequently a connection is used. The next stage is then how to make a chip that can make more neurons, challenging challenges!

When a voltage is applied to the NbO2, it heats up and, on reaching a critical temperature, turns into a conductor, allowing current to flow through. Given the chance to cool off, though, the device returns to its resistive state. Formally, this behavior is described as a memristor.

This doesn't quite match my understanding of a memristor:

http://en.wikipedia.org/wiki/Memristor wrote:

When current flows in one direction through a memristor, the electrical resistance increases; and when current flows in the opposite direction, the resistance decreases. When the current is stopped, the memristor retains the last resistance that it had, and when the flow of charge starts again, the resistance of the circuit will be what it was when it was last active.

Sounds interesting. Though to emulate a neuronic brain, they still have to figure out how to make it dynamically sever and reinforce connections based on how frequently a connection is used. The next stage is then how to make a chip that can make more neurons, challenging challenges!

A memristor FPGA?

And I did wonder if these things could be turned into a neural "emulator" when I first watched the HP presentation of them.

I think they should be focusing more on energy usage, as all technology should. Energy use by organisms and our brains is ridiculously low compared to computers, and as energy supplies in the world run down, we'll have less and less of a surplus to experiment with stuff like this. If memristors consistently use more electricity than their chip counterparts, then we'll never reach A.I.

So are memristors still on track to enter the market within the next few months? Around October 2011, HP's team said they were planning to get them to market as a flash/SSD competitor within a year and a half (and a DRAM competitor by 2014, 2015 at the latest).

I have a feeling that their predictions were a bit... optimistic. Along with the predictions of flash's minimum size being way too pessimistic.

I think they should be focusing more on energy usage, as all technology should. Energy use by organisms and our brains is ridiculously low compared to computers, and as energy supplies in the world run down, we'll have less and less of a surplus to experiment with stuff like this. If memristors consistently use more electricity than their chip counterparts, then we'll never reach A.I.

Seriously, where are my fucking organic computers?

So if your brain is burning about 400 kilocalories a day, that's about 20 watts, which is pretty good. But that's an average over the course of the day; if PET scans are right, the brain burns more when you're actively concentrating.
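The back-of-envelope number is easy to check:

```python
# 400 kcal/day expressed in watts (1 kcal = 4184 J).
joules = 400 * 4184.0
seconds_per_day = 24 * 60 * 60
watts = joules / seconds_per_day   # roughly 19.4 W, i.e. "about 20 watts"
```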

It's pretty hard to spec comparable hardware to the brain; it's effectively a whole host of specialized processors. But the reason I'm skeptical of calling the brain energy efficient is because it's so slow, operating in terms of milliseconds when a microprocessor is in terms of nanoseconds.

I think they should be focusing more on energy usage, as all technology should. Energy use by organisms and our brains is ridiculously low compared to computers, and as energy supplies in the world run down, we'll have less and less of a surplus to experiment with stuff like this. If memristors consistently use more electricity than their chip counterparts, then we'll never reach A.I.

Seriously, where are my fucking organic computers?

Well, it helps that our brains are liquid cooled, chemically powered, and massively parallel. Every last task has its own dedicated section, rather than trying to do it all via context switches on a single, general mass of grey.

In a sense we operate like mainframes, where various subsystems have their own dedicated processors rather than trying to run it all on the CPU. The end result is that the CPU on a mainframe can run at a much lower clock than a desktop CPU.

This is exciting news. Artificial neurons that behave similarly to biological neurons could take us to the next level of AI. Early efforts in AI tried to model neurons as logic gates. This method failed because our brains are prediction machines, not Turing machines. Simulating neurons in software is extremely slow, forcing researchers to use overly simplified models that do not have the pattern recognition or auto-associative memory capability of those in our cortex. If these neuristors are as powerful as researchers suggest, we may be one step closer to having true AI: machines that understand and learn like humans. If you're interested in the theory of intelligence and machine learning, I highly recommend Jeff Hawkins' book On Intelligence.

Sounds interesting. Though to emulate a neuronic brain, they still have to figure out how to make it dynamically sever and reinforce connections based on how frequently a connection is used. The next stage is then how to make a chip that can make more neurons, challenging challenges!

What would the point of that be? We can make some things now that function somewhat like neurons at a low level, but the technology isn't useful unless you can build something out of it that mimics neuron-like behavior at a higher level. That means you have to understand how the higher level works and whether the differences between memristors, transistors and neurons are even important at that level.

Sounds interesting. Though to emulate a neuronic brain, they still have to figure out how to make it dynamically sever and reinforce connections based on how frequently a connection is used. The next stage is then how to make a chip that can make more neurons, challenging challenges!

Knowing almost nothing about neurobiology, I'm speaking way out of my depth here, but I'm assuming this behavior could be emulated with a large quantity of neuristors, with pathways being deactivated and reassociated on demand. Given the way memristors are supposed to work, that actually seems possible once the power usage is taken care of. I can't imagine the software's going to look anything like what we have now, though. Whether or not this research results in A.I., it would be interesting to see artificial neurons beat us on process size, latency, and power consumption, even if they're used for different tasks than the meatspace equivalent.

It's pretty hard to spec comparable hardware to the brain; it's effectively a whole host of specialized processors. But the reason I'm skeptical of calling the brain energy efficient is because it's so slow, operating in terms of milliseconds when a microprocessor is in terms of nanoseconds.

I've heard estimates that the human brain operates on the level of exaflops because of how massively parallel it is, so the fact that it operates at a lower frequency isn't really that big of a deal.

If we could do exaflops of calculations for 20 watts things would be crazy.

So are memristors still on track to enter the market within the next few months? Around October 2011, HP's team said they were planning to get them to market as a flash/SSD competitor within a year and a half (and a DRAM competitor by 2014, 2015 at the latest).

I have a feeling that their predictions were a bit... optimistic. Along with the predictions of flash's minimum size being way too pessimistic.

Their predictions were pretty much right on the ball. The needed processes have been worked out and can be used in production now, but HP and Hynix pushed back commercialization to the end of 2013 to avoid cannibalizing sales of flash memory just after they rolled out flash process improvements. In other words, it's ready, they're holding it back purely for business reasons.

It's pretty hard to spec comparable hardware to the brain; it's effectively a whole host of specialized processors. But the reason I'm skeptical of calling the brain energy efficient is because it's so slow, operating in terms of milliseconds when a microprocessor is in terms of nanoseconds.

I've heard estimates that the human brain operates on the level of exaflops because of how massively parallel it is, so the fact that it operates at a lower frequency isn't really that big of a deal.

If we could do exaflops of calculations for 20 watts things would be crazy.

Here is a conference presentation from several years ago by an HP researcher discussing and demonstrating the usefulness of basic memristors (not neuristors) for AI. I found the discussion cogent and the demonstration impressive: memristor AI

I always thought that the voltage spikes were the equivalent of a one in binary.

Not quite. A neuronal spike only tells you which neurons are active, but the process that's actually happening is electro-chemical. The "chemical" part is where it gets tricky, because neurons carry a number of potential chemical signals that they could transmit at any given time (so... more than just a 1 and a 0). A spike will only show us a pattern of activity, but not precisely what was sent. It's like going into a factory in a foreign country: with enough time, you can start to recognize the correlation between certain shouts and actions taken on the floor, but you don't know for sure what is being said.

I always thought that the voltage spikes were the equivalent of a one in binary.

Not quite. A neuronal spike only tells you which neurons are active, but the process that's actually happening is electro-chemical. The "chemical" part is where it gets tricky, because neurons carry a number of potential chemical signals that they could transmit at any given time (so... more than just a 1 and a 0). A spike will only show us a pattern of activity, but not precisely what was sent. It's like going into a factory in a foreign country: with enough time, you can start to recognize the correlation between certain shouts and actions taken on the floor, but you don't know for sure what is being said.

To me the chemical part is the most irrelevant. How it is making the electricity doesn't matter. Sure the biologists & chemists can have a field day with it, but I care only about AI.

To me the chemical part is the most irrelevant. How it is making the electricity doesn't matter. Sure the biologists & chemists can have a field day with it, but I care only about AI.

The brain doesn't only use electrical signalling, but chemical signalling as well... basically, it uses two signalling methods simultaneously. It makes no sense to ignore a huge part of how the brain functions, if your goal is to use knowledge of the brain's functioning to spur advances in the field of A.I.

Neuron 101. The neuron generates spikes of voltage according to the neurochemicals that it senses. If the neuron gets stimulated enough, it generates enough voltage to cause the chain reaction that we know as 'firing'; then it resets and the whole process can be stimulated again. When it fires, it causes a release of certain neurochemicals. Different neuron types release different sets of neurochemicals.

As such, it is the rate of fire and the persistence of the neurochemicals released (rather than the amplitude of the voltage) that gives us the measure of how excited the neuron is.

The second important difference is that the neuron has a signal receiving part, and a signal releasing part. The signal receiving part can receive from up to 10,000 other neurons. The neurochemicals released from a single neuron are generally not enough to make another neuron fire, but with a number of other neurons firing, this will create enough stimulation for this neuron to fire. Again, just because this neuron fires, it isn't, by itself, enough to cause another neuron to fire. Further, the soup of neurochemicals that the neuron sits in can excite or inhibit the neuron, and these neurochemicals can be released from distantly located parts of the brain. So a neuron isn't just excited by the neurons that are touching it. It is this construction (I believe) that allows us to correlate different stimuli into a conception, i.e. we can receive an aural stimulus, recognise it as a parrot, and then conjure up the image of a parrot, be reminded of a parrot joke, wonder what John Cleese is up to these days, etc., etc., etc.

All this is to say that making a transistor that requires a certain voltage threshold to trigger an action potential isn't the whole story by any means. The key is the networking of each memristor with tens of thousands of memristors of differing lengths, all held together in a chemical soup that excites or inhibits various clusters of other memristors. This is to say nothing of modelling how the brain grows, the neuronal support system, the hormones that trigger neuron growth and neuron pruning (an adult brain is a lot neater than a child's brain), and so on and on.

Brains have a large feedback component, wherein the excitation of certain neuronal bundles will cause 'distance' neurochemicals to be released that can soak the whole brain, making other bundles of neurons more eager or more reluctant to fire. It's not enough to build a silicon version of a brain, as it would simply be an inefficient set of circuits. The thing that makes a brain a brain is the wired networks that have in-built reward and repulsion mechanisms that encourage the organism to do stuff.

It is a fabulously complicated system. I can't wait until we have CPUs in our computers that can have a genuine panic attack. Can you imagine a machine which has issues with its motherboard?

OK... there is a lot of confusion about this, so here is the difference between a binary one and a spike.

First, the similarity: Spikes are all or nothing events, and it is not clear that their amplitude matters to the receiver, or if it even varies meaningfully.

The difference: the time interval between spikes is a real number, not a discrete multiple of some minimum decided by a clock frequency. The interval between spikes is ANALOG.

Furthermore, neurons cannot sustain an ON output; at best they can output a high-frequency train of spikes, but there will always be an interval. There may be some minimum interval between spikes for a particular neuron, but beyond that, the interval between spikes is totally analog and not a multiple of some minimum, as stated before.

Also, in neural processing the pattern of spikes and the intervals between them matter quite a lot. The frequency of spiking, for example, matters, but much more complex patterns than a simple frequency-based spiking pattern are possible.

In neural processing, incoming spikes are processed at synapses and in the dendrites as well as in the soma (body) of the neuron, and not only the frequency or other temporal patterns but also the comparative arrival time of spikes from different inputs can make a difference in how the spikes are processed (e.g. STDP).

Spikes might cause the "membrane potential" (an analog quantity) of the receiver neuron to exceed a critical threshold and, depending on how they are processed, lead to the neuron releasing a spike of its own (for example, an inhibitory input would reduce the membrane potential, and an excitatory input would do the opposite; whether an input is excitatory or inhibitory depends on the "sender" neuron type and the signalling chemicals released at the synapse). Spikes in certain patterns, including patterns involving multiple inputs, can also lead to synapses getting stronger or weaker (STDP again, for example).
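The pair-based STDP rule mentioned above is usually written as an exponential of the pre/post spike-time difference. A minimal sketch, where the amplitudes and time constant are typical textbook values rather than measurements:

```python
import math

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Weight change for a spike-time difference dt = t_post - t_pre (ms).

    Illustrative pair-based STDP: constants are textbook-style, not measured.
    """
    if dt_ms > 0:     # pre fired just before post: strengthen the synapse
        return a_plus * math.exp(-dt_ms / tau_ms)
    else:             # post fired before (or with) pre: weaken it
        return -a_minus * math.exp(dt_ms / tau_ms)
```

Pre leading post by a few milliseconds potentiates, the reverse order depresses, and the effect fades as the two spikes move apart in time.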

The exact laws governing which input patterns, synapse types, etc. lead to which behaviors in the receiver neuron are quite complex. A VERY simple model would be to accumulate incoming spikes, scaled by a synaptic weight, into the membrane potential (forget about dendrite processing in this simple model) and simultaneously decay the membrane potential, so that when spikes stop coming in, it slowly decreases. If the membrane potential ever goes beyond a certain limit, you have a spike! (The receiver neuron fires.)
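That "VERY simple model" is essentially the classic leaky integrate-and-fire neuron. A direct sketch in arbitrary units (the threshold, leak, and weight values are made up for illustration):

```python
# Leaky integrate-and-fire: decay the membrane potential each step,
# accumulate weighted input spikes, fire and reset at threshold.
# Threshold, leak, and weight are arbitrary illustrative values.
def lif_run(inputs, threshold=1.0, leak=0.95, weight=0.3):
    """inputs: sequence of 0/1 presynaptic spikes. Returns output spike train."""
    v = 0.0
    out = []
    for s in inputs:
        v = v * leak + weight * s     # decay, then accumulate weighted input
        if v >= threshold:            # threshold crossed: fire and reset
            out.append(1)
            v = 0.0
        else:
            out.append(0)
    return out

train = lif_run([1] * 20)             # steady input: fires periodically
sparse = lif_run([1, 0, 0, 0, 0] * 4) # sparse input: leaks away, never fires
```

A steady input train drives the potential over threshold every few steps, while the same number of spikes spread out in time decays away without ever firing, which is exactly the interval-sensitivity the comment describes.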

So, neurons clearly do not operate like Boolean gates, and spikes are clearly not digital signals, but perhaps something in between typical digital and analog (maybe with some of the advantages of both, or intermediate signalling and error-propagation properties?).

Neurons are analog accumulators (in a simple model), and the synapses may well have analog weights (another key difference with digital processing), and, of course, in real life all sorts of weird continuous functions and differential equations with analog (real valued) outputs may be implemented by synapses, or combinations of them.

In the Blue Brain project, each synapse requires differential equations to simulate, so again, not a simple weight or a Boolean signal transmitter. I kept the chemical "implementation details" out of this and focused on the end-results, so I hope you can see why neurons are not gates and spikes are not binary ones.

Why? Hynix is about to roll out a new 12-15 nm flash process (looks like it's to become available first quarter of 2013). They can simultaneously come out with a new technology that will eat into their return on that very expensive investment, or they can hold off for a year. Their stated reason for delaying memristor ReRAM seems entirely reasonable.

It's pretty hard to spec comparable hardware to the brain; it's effectively a whole host of specialized processors. But the reason I'm skeptical of calling the brain energy efficient is because it's so slow, operating in terms of milliseconds when a microprocessor is in terms of nanoseconds.

I've heard estimates that the human brain operates on the level of exaflops because of how massively parallel it is, so the fact that it operates at a lower frequency isn't really that big of a deal.

If we could do exaflops of calculations for 20 watts things would be crazy.

I think they should be focusing more on energy usage, as all technology should. Energy use by organisms and our brains is ridiculously low compared to computers, and as energy supplies in the world run down, we'll have less and less of a surplus to experiment with stuff like this. If memristors consistently use more electricity than their chip counterparts, then we'll never reach A.I.

Seriously, where are my fucking organic computers?

Don't forget the energy budget of the support equipment. The human body uses quite a bit of energy.

This is exciting news. Artificial neurons that behave similarly to biological neurons could take us to the next level of AI. Early efforts in AI tried to model neurons as logic gates. This method failed because our brains are prediction machines, not Turing machines. Simulating neurons in software is extremely slow, forcing researchers to use overly simplified models that do not have the pattern recognition or auto-associative memory capability of those in our cortex. If these neuristors are as powerful as researchers suggest, we may be one step closer to having true AI: machines that understand and learn like humans. If you're interested in the theory of intelligence and machine learning, I highly recommend Jeff Hawkins' book On Intelligence.

Disclaimer: I work in AI on cognitive architectures....

Why do we need to model the human mind? Why do we need systems that can learn as humans do? What about systems which learn, but differently from humans? Are those not possible? They actually are, and we have made many leaps and bounds in AI.

Modern cognitive architectures do not necessarily model how the brain works vis-à-vis neurons; rather, they can model the functionality. By modeling the functionality we can use Turing machines for AI, and we do. Modern AIs use reinforcement learning as one of their methods of learning. However, AIs can use even more specialized systems for learning:

In AI there are different types of architectures, each working differently. I'm only going to go into one here: a brief description of Soar. Soar keeps all of its knowledge in working memory in a node structure. There is the top state, and then nodes for knowledge. Soar also uses impasses (an example is when there is no output action after a decision cycle, a state-no-change impasse). An impasse triggers the creation of a substate which encompasses the superstate, and so on. All via nodes.

Why am I describing that? Because of episodic memory: episodic memory is a look-back memory which allows you to capture states of working memory and save them. This allows you to look back at your previous thoughts; it's similar to humans' long-term memory.

Why is *that* relevant? Well, because learning can work in multiple ways. There is reinforcement learning, which is learning through a positive and negative feedback loop; in Soar, this is done by modifying existing rules to change the probability of them being selected. However, Soar has other mechanisms for decisions. A big one is something known as "chunking," which is generating a rule based on a substate. Now, this itself is not exactly learning as we would think of it, but Soar learns a rule for an entire decision process in the substate, which speeds things up a lot, and when you add reinforcement learning on top of this you can get quite complex chains. Add in many complex and varied rules, plus general rules which can make decisions and trigger the creation of substates, each of which can have rules which learn.
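As a loose illustration of "modifying existing rules to change the probability of them being selected," here is a generic softmax bandit that nudges per-rule preference values toward observed reward. This is an invented toy analogy, not Soar's actual numeric-preference mechanism; all names and constants are made up.

```python
import math
import random

def choose(prefs, rng):
    """Pick an index with probability proportional to exp(preference)."""
    weights = [math.exp(p) for p in prefs]
    r = rng.random() * sum(weights)
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(prefs) - 1

rng = random.Random(0)
prefs = [0.0, 0.0, 0.0]          # one preference value per "rule"
rewards = [0.0, 1.0, 0.2]        # rule 1 actually works best
for _ in range(2000):
    i = choose(prefs, rng)
    prefs[i] += 0.1 * (rewards[i] - prefs[i])   # nudge toward observed reward
```

After training, the rule that actually earns reward carries the highest preference and is therefore selected most often, which is the feedback-loop behavior described above.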

This is not exactly how humans learn as we perceive it, but it is similar. We don't need to replicate how humans learn to create systems in which AIs can learn and get better. We can already create AIs which can understand and recognize objects in the world and understand phrases. There is currently an ongoing project funded by DARPA, for instance, which uses Soar to process language and then act on it, moving blocks and other objects around using a Kinect sensor and a robotic arm. We are already getting a lot closer than one would think to AIs which will be great for many purposes. True, no one has discovered sentience yet, but do we really want a Skynet? (/sarcasm)

I personally do not understand the obsession with replicating human processes in AIs; why not make systems which work, but don't necessarily do it how humans do? Humans are very powerful organic computers; there is no way anytime soon we can replicate that level of computing power (in that way, with neurons) in a way which can replicate human-level intelligence. Why not just work on systems which can function and do quite well on current hardware?

I'm not criticizing this research, and I do think it will be useful, but do we necessarily have to replicate the human brain to create intelligence? That's the big question in AI, and we do not know the answer, but I don't think we do, and I think it might be easier not to try to replicate the most complex part of nature we know, and to focus instead on something achievable. That's not to say we should stop trying to replicate the human brain; I just think more research should be done on AI which is practical now. Once we get quantum computers, well... things might change, but for now why not focus on what we think is achievable in the next decade?

I know some will disagree with this, but it is not meant as a criticism of any research; rather, it's a call for more research into something we know (or at least think is highly probable) can be done.

Sounds interesting. Though to emulate a neuronic brain, they still have to figure out how to make it dynamically sever and reinforce connections based on how frequently a connection is used. The next stage is then how to make a chip that can make more neurons, challenging challenges!

Knowing almost nothing about neurobiology, I'm speaking way out of my depth here, but I'm assuming this behavior could be emulated with a large quantity of neuristors, with pathways being deactivated and reassociated on demand. Given the way memristors are supposed to work, that actually seems possible once the power usage is taken care of. I can't imagine the software's going to look anything like what we have now, though. Whether or not this research results in A.I., it would be interesting to see artificial neurons beat us on process size, latency, and power consumption, even if they're used for different tasks than the meatspace equivalent.

Edit: Typos

I don't think software is even an applicable concept when considering how an irregular array of neuristors would work.

It's pretty hard to spec comparable hardware to the brain; it's effectively a whole host of specialized processors. But the reason I'm skeptical of calling the brain energy efficient is because it's so slow, operating in terms of milliseconds when a microprocessor is in terms of nanoseconds.

I've heard estimates that the human brain operates on the level of exaflops because of how massively parallel it is, so the fact that it operates at a lower frequency isn't really that big of a deal.

If we could do exaflops of calculations for 20 watts things would be crazy.

Exaflops and we still need a calculator to do math.

fuzzy exaflops.

I've never met a human being who can achieve a single FLOP. Give us a break with the "my brain is an incomparable computer" shtick.

This is pretty interesting; one thing I suspect science doesn't know yet is whether there are "fast" memristor designs. If physics dictates that neurons (or something similar) are as quick to respond as such things get, then simulating a neuron on a binary computer will perform better than building circuits that have neuron-like properties. If the reverse turns out to be true, we'll be building von Neumann simulators on top of neural hardware.

This is pretty cool stuff, but it's important to remember that the goal here is to understand the human brain better, not to make a useful computer.

I viewed this as the beginning of building the critical intermediary between computers and brains.

Computers excel at one thing. Brains excel at another. But, the weakest link is the I/O between the two.

Currently, we have to operate on a computer via our slow sensory input/output (keyboard, monitor, etc).

If the computer could shove its output over a neuristor connection that our brains could read from, and our brains could then make decisions and shove the result back to the computer... neural links to computers would be fucking sweet.

It's pretty hard to spec comparable hardware to the brain; it's effectively a whole host of specialized processors. But the reason I'm skeptical of calling the brain energy efficient is because it's so slow, operating in terms of milliseconds when a microprocessor is in terms of nanoseconds.

I've heard estimates that the human brain operates on the level of exaflops because of how massively parallel it is, so the fact that it operates at a lower frequency isn't really that big of a deal.

If we could do exaflops of calculations for 20 watts things would be crazy.

Exaflops and we still need a calculator to do math.

fuzzy exaflops.

I've never met a human being who can achieve a single FLOP. Give us a break with the "my brain is an incomparable computer" shtick.

Brains may not do floating-point operations; however, they are doing quite a bit of something. Benchmarking brains against computers may be silly, but don't pretend modern computer hardware can begin to simulate the number of (I guess I will call them switches) in even a mouse.