Attractor Networks, (A bit of) Computational Neuroscience Part III

Brains are composed of networks of neurons connected by synapses, and these networks have computational properties beyond those of the neurons and synapses themselves. In this post, I am going to talk about a class of neural networks which I think are fascinating: attractor networks. These are recurrent neural networks with attractor states; these states, and the dynamics governing an attractor network's evolution between them, endow these networks with powerful computational properties. Some attractor networks are useful models of neural circuits. It would be helpful to have a little knowledge of neuroscience and dynamical systems, and fortunately for you my previous posts cover those topics: for an introduction to dynamical systems, you can read Part I; for an introduction to synapses, you can read Part II. You can probably get by without them, though.

Attractor networks give rise to many interesting computational properties, e.g. categorization, noise filtering, integration, and memorization [6]. These networks can shed light on the following questions: How can networks act stably and persistently? If different neurons hold different pieces of information, how do networks integrate that information? And how can networks retrieve memories?

COMPUTATIONAL ENERGY of a collective-decision circuit can be pictured as a landscape of hills and valleys. The connection pattern and other physical characteristics of the circuit determine its contours. The circuit computes by following a path that decreases the computational energy until the path reaches the bottom of a valley, just as a raindrop moves downhill to minimize its gravitational potential energy. The surface shown here could represent an associative memory, in which the valleys correspond to memories that are stored as associated sets of information (x's). If the circuit is started out with approximate or incomplete information, it follows a path downhill (colored arrow) to the nearest valley, which contains the complete information. Taken from [1] (Tank and Hopfield 1987) "Collective Computation in Neuronlike Circuits," Scientific American 257, December: 104–114.

Through the lens of dynamical systems theory, these networks are easier to understand, and thus the questions above can be answered more easily. Dynamical systems theory provides us with two levels of description:

network space: The full state of the neural network, which is quite large and unwieldy.

attractor space: A reduced space of the full neural network, which includes only the points on the attractors.

Let’s look at two examples of attractor networks. The first we will look at is the Hopfield network, an artificial neural network. The second we will look at is a spiking neural network from [3] (Wang 2002).

Hopfield Network

Hopfield networks [2] (Hopfield 1982) are recurrent neural networks of binary neurons. Although not a spiking network model, the Hopfield network is a classic model of associative memory.

Through the lens of dynamical systems, learning is achieved by adjusting the network so that the to-be-learned patterns become attractor states, i.e. if the network starts in a state near a stored pattern, its dynamics carry it to that pattern.

Network

A Hopfield network is composed of $N$ neurons $\vec{V}$ with thresholds $\theta$ (typically all identical and $=0$) and connections $W$. The topology of the network connections is simple: each neuron is connected to all other neurons except itself, and all connections are symmetric, or

$$w_{ij} = w_{ji}, \qquad w_{ii} = 0.$$

Instructions:

To make the network learn your pattern(s), make sure the remember columns are checked and click .

Create a noisy pattern by clicking the randomize button.

To recover an incomplete pattern, click the .

Notes:

If most of your cells are off in each of your patterns, the network may create a spurious fixed point where all cells are off.

In this implementation, when settling, the network first updates asynchronously, but on the final step it updates synchronously. This is done (1) to demonstrate asynchronous updating and (2) to save time, so don't draw any conclusions about the rate at which Hopfield networks converge to a minimum based on this.


The main parts of the code are below. You can find the full hopfield/tensor code here.

This should be pretty familiar if you have worked with artificial neural networks (for you computer scientists) or rate networks (for you theoretical neuroscientists): it's a thresholded linear combination of input activations and synaptic weights.
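Concretely, that update can be sketched in NumPy as follows. This is a minimal sketch with $\pm 1$ neurons; the function names and the settling loop are illustrative, not taken from the post's full implementation:

```python
import numpy as np

def update_neuron(V, W, i, theta=0.0):
    """Set neuron i to +1 if its weighted input reaches threshold, else -1.

    V: state vector with entries in {-1, +1}, shape (N,)
    W: symmetric weight matrix with zero diagonal, shape (N, N)
    """
    V[i] = 1 if W[i] @ V >= theta else -1
    return V

def settle(V, W, theta=0.0, sweeps=10, seed=0):
    """Asynchronously update neurons in random order for a few sweeps."""
    rng = np.random.default_rng(seed)
    for _ in range(sweeps):
        for i in rng.permutation(len(V)):
            update_neuron(V, W, i, theta)
    return V
```

Starting from a corrupted pattern, repeated asynchronous updates walk downhill in the energy landscape until the network sits in the nearest stored pattern.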

Learning Rule

To store a pattern, $V$, in a Hopfield network, the pattern must become a fixed-point attractor, i.e. $V = update(\langle V, W \rangle)$. By setting the values of the weight matrix according to the Hebbian learning rule, the patterns become minimum-energy states. The Hebbian learning rule is not the only learning rule (see: the Storkey learning rule).
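As a sketch (assuming patterns with entries in $\{-1, +1\}$, the convention in [2]), the Hebbian rule sets each weight to the average co-activation of the two neurons across the stored patterns:

```python
import numpy as np

def hebbian_weights(patterns):
    """W_ij = (1/N) * sum over patterns mu of V_i^mu * V_j^mu, with W_ii = 0.

    patterns: array of shape (num_patterns, N), entries in {-1, +1}
    """
    patterns = np.asarray(patterns, dtype=float)
    N = patterns.shape[1]
    W = patterns.T @ patterns / N
    np.fill_diagonal(W, 0.0)  # no self-connections
    return W
```

Each stored pattern then satisfies $sign(WV) = V$, so long as the patterns are few and nearly uncorrelated (the capacity is roughly $0.14N$ random patterns).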

Spiking Attractors

Spiking attractor networks are fascinating: they suggest how brains are able to (1) act stably, (2) integrate information distributed across neurons, (3) recover memories, etc., despite being composed of billions of seemingly cacophonous neurons. Let's consider a binary decision task in which a network receives two noisy signals, $A$ and $B$, and must choose which signal is stronger. Given the constraints of neural circuits, this is actually quite a feat. If each neuron in each population receives a different Poisson spike train generated from its respective noisy signal, then each neuron within a population will possess different and potentially conflicting information about the state of the world. In order to perform the binary decision task, the network must be able to integrate information about both signals, which is distributed across many neurons and across time, and compare them. Additionally, in order to commit to a decision, one selective population of neurons must drive the neural circuit and turn off competing coalitions of neurons.
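To make the setup concrete, here is a toy sketch (my own code, not the post's simulation) of how two neurons driven by the same underlying 40 Hz signal nonetheless receive different spike trains, and hence different momentary evidence:

```python
import numpy as np

def poisson_train(rate_hz, duration_s, rng, dt=0.001):
    """Binary spike train: each bin of width dt spikes with prob. rate*dt."""
    n_bins = int(round(duration_s / dt))
    return rng.random(n_bins) < rate_hz * dt

rng = np.random.default_rng(0)
# Two neurons in the same population, same underlying 40 Hz signal:
neuron_1 = poisson_train(40.0, 1.0, rng)
neuron_2 = poisson_train(40.0, 1.0, rng)
```

Both trains carry roughly 40 spikes per second, but their precise timing differs, which is why the network must integrate across neurons and across time before it can compare the two signals.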

Slow Reverberations

Below is an (approximate) implementation of [3] (X-J Wang 2002). This recurrent spiking network has 4 populations of neurons:

(#0-#399) a population of inhibitory inter-neurons.

(#400-#1519) a population of background excitatory neurons which help sustain activity.

(#1520-#1759) a selective excitatory population for signal $A$.

(#1760-#1999) a selective excitatory population for signal $B$.

Essentially, the two selective populations recurrently stimulate themselves and inhibit each other through the inter-neurons. That is, they compete against each other to drive the neural circuit. The reverberation (or ramping up) of recurrent NMDA-mediated excitation allows the network to integrate (or accumulate) information over time and across neurons to make a decision.
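This competition can be caricatured with a two-variable firing-rate model. This is a drastic simplification of the spiking network with made-up parameters, not Wang's model, but it shows the mechanism: each population excites itself and inhibits the other, so a small difference in input is amplified until one population wins and the other is silenced:

```python
import numpy as np

def compete(input_a, input_b, steps=2000, dt=0.001, tau=0.1, seed=0):
    """Toy winner-take-all rate model of two competing populations.

    tau * dr/dt = -r + relu(w_self*r_self - w_cross*r_other + input + noise)
    w_self and w_cross are illustrative values, not fit to data.
    """
    rng = np.random.default_rng(seed)
    w_self, w_cross = 1.2, 1.0
    r_a = r_b = 0.0
    for _ in range(steps):
        drive_a = w_self * r_a - w_cross * r_b + input_a + rng.normal(0.0, 0.5)
        drive_b = w_self * r_b - w_cross * r_a + input_b + rng.normal(0.0, 0.5)
        r_a += dt / tau * (-r_a + max(drive_a, 0.0))
        r_b += dt / tau * (-r_b + max(drive_b, 0.0))
    return r_a, r_b
```

With equal inputs the noise decides the winner; with a slightly stronger input to one population, that population almost always wins. This is the sense in which the attractor dynamics both integrate the evidence and commit to a decision.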

Here the network level is the membrane potential of each of the 2000 neurons and the activation of the synapses between each pair of neurons. The attractor level is the firing rates of the two selective populations, a reduction of several orders of magnitude.

The default input to the model below is such that the network should make random decisions: both selective populations receive spike trains at 40 Hz.

Instructions:

Restart the simulation, by clicking .

Tweak the simulation, by changing the parameters in the code editor below (feel free to email me if you want to do this but can’t figure out my admittedly messy code).

To restart the simulation with new parameters, click .

Notes:

If you don't see the code box below, you're on mobile; a video appears in the simulation's place. Try it out on desktop!

AInput and BInput specify the input strength to the selective populations. It’s a good place to start tweaking parameters.

Alternatively, you could tweak the conductances between each population.

Above is a video of the simulation below, in case your computer runs this code slowly or if you are on mobile.

A raster plot of the attractor network's activity: a dot at $(i, t)$ means neuron $i$ fired at time $t$. (Left) The attractor-level description of this network. (Right) The weight matrix of excitatory synapses, i.e. AMPA and NMDA: $w_{ij} =$ weight from neuron $i$ to neuron $j$. Notice the populations are weakly connected to each other but strongly connected to themselves. Darker is weaker, lighter is stronger.