Exploring Connectivity in the Brain’s Network of Neurons

The brain has long presented an amazing challenge for applied mathematics: A complete model would require \(\sim 10^{11}\) neurons, interacting nonlinearly and stochastically through rapid voltage fluctuations (called “spikes”) on a complex and dynamic network. But we are in an especially wonderful time for the field, as technical breakthroughs are yielding unprecedented data about both the brain’s network architecture and its activity. There is a major opportunity to integrate this data into a new understanding of how the connectivity among neurons leads to their collective dynamics and, eventually, to their astonishing collective function. Mathematics and scientific computation will play a key role; researchers worldwide, including members of the SIAM community, are hard at work. Our group and our collaborators have the joy of being a part of this effort, and I describe some of our experiences in what follows.

Emerging data on the network of connections among the brain’s neurons is revealing a sparse network that is complicated but highly nonrandom (Figure 1). Numerous features of “complexity” in the network have been identified, ranging from heavy-tailed distributions of connection strengths to “small-world” properties [1]. An irresistible question arises: What is the impact of this complex network architecture on the collective dynamics of the network?

In the next breath, we realize that there is no one answer to this question, and that what we find will depend on (at least) (1) the scale at which we characterize the dynamics and (2) assumptions about the dynamics of the individual neurons. Here, I discuss what can be learned when we make the following two choices: (1) For the scale, we consider collective dynamics across the network as a whole. We characterize this by the level of correlation, or coherence, \(C\), in the spikes emitted from pairs of neurons, averaged across the network. The value of \(C\) can determine, for example, how strongly the spikes of a network of neurons work together to drive spiking in other cells downstream. (2) For the dynamics, we assume that the spiking is stochastic and irregular at each cell. The results described here are due to work by B. Lindner, B. Doiron, and A. Longtin [5], by Stefan Rotter and colleagues [6], and by our team––featuring Krešo Josić and two (former) students, James Trousdale and Yu Hu.

We begin with a nonlinear stochastic differential equation for each neuron. The equations are coupled into a network via impulses that occur each time a neuron spikes. When these spikes occur frequently enough, the system can be approximated by a stochastic (point) process, in which the spiking times of each cell are linearly modulated by the past inputs it received. Use of transform methods then yields an explicit solution for the coherence \(C\) [2, 6, 7]. This solution involves the connectivity graph \(W\). At first glance, then, it seems that we have what we were after: a link between network architecture and dynamics, which we write \(C=f(W)\).
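To make the final step concrete, here is a minimal numerical sketch of the zero-frequency version of such a linear-response solution, in which the full cross-covariance matrix takes the form \((I-W)^{-1}C_0(I-W)^{-T}\). The function name, the unit baseline variance, and the random test network are all illustrative assumptions, not the exact formulation of [2, 6, 7]:

```python
import numpy as np

def coherence_from_W(W, c0=1.0):
    """Zero-frequency linear-response sketch: cross-covariance
    C = (I - W)^{-1} C0 (I - W)^{-T}, with C0 = c0 * I the baseline
    spiking variance; returns the network-averaged pairwise correlation.
    Convention: W[i, j] is the connection weight from neuron j onto i."""
    N = W.shape[0]
    propagator = np.linalg.inv(np.eye(N) - W)   # (I - W)^{-1}
    C = c0 * propagator @ propagator.T          # cross-covariance matrix
    d = np.sqrt(np.diag(C))
    corr = C / np.outer(d, d)                   # normalize to correlations
    off = ~np.eye(N, dtype=bool)                # exclude self-correlations
    return corr[off].mean()

# A sparse, weakly coupled random graph (spectral radius well below 1,
# so the inverse exists and the linearized theory is sensible).
rng = np.random.default_rng(0)
N = 200
W = (rng.random((N, N)) < 0.1) * 0.02
np.fill_diagonal(W, 0.0)
c = coherence_from_W(W)
```

Because all the weights here are excitatory (nonnegative), the series \((I-W)^{-1} = I + W + W^2 + \cdots\) has nonnegative entries and the resulting average coherence is a small positive number.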

This link, however, is very difficult to use. We rarely have access to the full \(W\) for a neural network; with \(N\) cells, a direct approach requires \(\sim N^2\) measurements to test whether and with what strength each pair is connected, and this is often beyond the limits of experiments. What we need is to isolate the right “summary statistics” of \(W\) that successfully predict \(C\).

Figure 2. Network connectivity motifs.

The first step is to realize that \(f(W)\) can be expanded in terms of network connectivity motifs, or small subnetworks embedded within the larger connection graph (Figure 2). Notably, such motifs have been quantified in brain data (see Figure 1), and shown to be over-expressed compared with expectations under random connectivity. This expansion shows that \(C\) is determined by the frequency of these motifs––that is, the number of times they occur in the network [6, 7]. Thus, we have a link between the statistics of connectivity and network dynamics. However, the link can be quite complicated. This is because many motifs of increasing size (and hence increasing difficulty to measure) often contribute to the expansion. As a consequence, truncating it to predict the coherence \(C\) based on smaller motifs––those motifs that are possible to measure, tabulate, and compare across brain areas––isn’t guaranteed to work.
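The motif bookkeeping at second order can be sketched in a few lines: the three two-edge motifs (chains, diverging, and converging connections) are counted by entries of \(W^2\), \(WW^T\), and \(W^TW\). This is an illustrative sketch, not code from the cited papers; for an Erdős–Rényi random graph each frequency sits near the squared connection probability, so over-expression appears as an excess above that baseline:

```python
import numpy as np

def two_edge_motif_freqs(A):
    """Frequencies of the three two-edge motifs in a binary adjacency
    matrix A with zero diagonal; A[i, j] = 1 means j connects onto i.
    Each frequency is normalized per ordered triple of distinct neurons."""
    N = A.shape[0]
    triples = N * (N - 1) * (N - 2)
    def freq(M):                        # off-diagonal sum per triple
        return (M.sum() - np.trace(M)) / triples
    return {
        "chain":      freq(A @ A),      # j -> k -> i
        "diverging":  freq(A @ A.T),    # k -> i and k -> j (common source)
        "converging": freq(A.T @ A),    # i -> k and j -> k (common target)
    }

# Erdos-Renyi control: every frequency should be close to p**2 = 0.01.
rng = np.random.default_rng(1)
N, p = 300, 0.1
A = (rng.random((N, N)) < p).astype(float)
np.fill_diagonal(A, 0.0)
freqs = two_edge_motif_freqs(A)
```

Measured cortical networks show some of these frequencies well above the \(p^2\) baseline, which is exactly the over-expression referred to above.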

A key advance is to define new network statistics, which we refer to as motif cumulants [3, 4]. These isolate the excess frequency of each successively larger motif, above and beyond that which would be predicted from the smaller motifs that make it up (Figure 3). Intriguingly, the expansion of \(C = f(W)\) can be resummed explicitly in terms of the motif cumulants. The result is possible to truncate––and typically yields quite accurate predictions for \(C\), based on small (and hence measurable) motifs alone. This gives an appealing recipe for contrasting different networks: By counting the motifs that occur and comparing them according to the formula, we can understand why some produce more––or less––coherent dynamics.
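A minimal numerical illustration of the second-order motif cumulants, using a hypothetical network rather than data: subtracting the \(p^2\) baseline predicted by the connection probability alone leaves cumulants that vanish for an unstructured random graph but pick out genuine structure. Here, broad variability in out-degrees creates an excess of diverging (common-source) motifs while leaving chain and converging cumulants near zero:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 400
# Hypothetical heterogeneity: each neuron's outgoing connection
# probability is drawn from a broad distribution with mean 0.1.
p_out = rng.beta(2.0, 18.0, size=N)
A = (rng.random((N, N)) < p_out[None, :]).astype(float)  # column j = source j
np.fill_diagonal(A, 0.0)

p = A.sum() / (N * (N - 1))            # first-order motif: connection prob.
triples = N * (N - 1) * (N - 2)
freq = lambda M: (M.sum() - np.trace(M)) / triples

# Second-order motif cumulants: frequency minus the p**2 prediction
# from the first-order motif alone.
kappa = {
    "chain":      freq(A @ A)   - p**2,   # near zero for this network
    "diverging":  freq(A @ A.T) - p**2,   # positive: excess common sources
    "converging": freq(A.T @ A) - p**2,   # near zero for this network
}
```

The positive diverging cumulant here is (up to sampling noise) just the variance of the out-degree distribution, a small worked case of the general principle: the cumulants isolate structure beyond what the smaller motifs predict.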

Figure 3. Motif cumulant decomposition.

That said, the theory is far from complete. One issue is that it isn’t really a from-the-ground-up solution of the architecture–dynamics problem: The motifs described above capture “functional” as opposed to purely structural connectivity. For example, cells must be weighted according to their spiking rates when the motif cumulants are measured. A complete solution would predict these rates from the network architecture alone. Moreover, the theory requires a linearization of the point process model, around an operating point given by the spiking rates. This linearization will fail for networks at low spiking rates (rates cannot be negative), or those driven strongly out of equilibrium––inviting the (perturbative?) development of a nonlinear theory.

Another issue arises when we return to the definition of the dynamical coherence \(C\), which describes the average correlation among cell pairs—i.e., given any two cells in the network, how likely they are to produce voltage spikes at similar times. But are these pairwise statistics the end of the story? A subject of resurgent interest is higher-order correlations in the spiking of sets of more than two neurons, beyond what could be predicted from results for the constituent cell pairs. We are working with Michael Buice to identify the network motifs that drive higher-order correlations, using quite beautiful tools developed in theoretical physics.

This connects to a larger question for the field. Just what is the right way to describe population-wide neural activity? The approach we just described predicts successive terms in a hierarchy of statistical moments. But it is not clear that this moment-by-moment approach is the most efficient way to describe the population-wide activity. For example, recent work has identified low-dimensional random process or dynamical systems models that capture population-wide activity via fluctuations in “common modes” of activity. Understanding the origin of this dimensionality reduction, and building a predictive theory of when it will occur, would be a fascinating step forward.

Beyond an understanding of how coherent spiking dynamics arise from network structure lies an even richer question: What do these dynamics mean for the encoding of sensory information? It has become cliché to say that spikes are the language of the brain, but this underlines the point: Patterns of spikes produced in neural circuits are the brain’s only tool for making inferences about the sensory world and, eventually, driving behavior. So which features of coherent spiking are relevant to how the spike patterns carry information, and which are superfluous?

Two members of our group, Natasha Cayco Gajic and Joel Zylberberg, are pursuing an answer to this question in work that has a mathematical flavor similar to that of the network dynamics problem described above. Assuming that we know the coherent spiking among cell pairs, how does further coherence among larger cell groups add (or subtract) from levels of encoded information? The answer draws on statistics and information theory, and is sure to be complicated in general, but there appear to be systematic trends in how higher-order coherence, especially when its strength varies as stimuli change, contributes to coding. Thus equipped, Cayco Gajic and Zylberberg are studying whether and how such coherent spiking can emerge from the nonlinear dynamics of neural networks. It is research paths like this that I believe make mathematical neuroscience such an exciting and fun field. It always pushes us to move between mathematical disciplines––in our case, from basic information theory, to dynamics, to stochastic processes––making as many (coherent, we hope!) new connections as we can.

The SIAM News Blog brings together updates on cutting-edge research, events and happenings, as well as insights on broader issues of interest to the applied math and computational science community.