Peter Latham

Gatsby Computational Neuroscience Unit

Network dynamics and neural coding

My research focuses on understanding how biologically realistic networks carry out computations. More specifically, my goal is to understand how connectivity and single neuron properties determine the ability of a network to perform computations, and how synaptic learning rules, single neuron properties, and patterns of input determine connectivity.

In their most general form, computations in the brain are performed by local networks that transform population activity on input fibres into population activity on output fibres. Such computations can be quite complex and involve many areas, even for seemingly simple tasks. For example, seeing an object and reaching for it requires networks that take as input the whole visual stream, keep only the information relevant to the task of locating the object, transform the position of the object from retina-centred coordinates to body-centred coordinates, and then produce activity in motor cortex that activates the appropriate muscles. In addition, these networks must keep track of uncertainty: if there are, say, costs associated with making mistakes (e.g., when reaching into a hot oven), knowledge of likely errors in reaching movements is needed to guide behaviour.
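The coordinate transformation mentioned above can be caricatured in a few lines. This is only a toy sketch, not anything from the text: it assumes positions are 2-D offsets that combine additively, whereas the brain's actual transformations are nonlinear and carried out by population codes.

```python
import numpy as np

def retina_to_body(retinal_pos, eye_pos, head_pos):
    """Toy transform from retina-centred to body-centred coordinates.

    Assumes each position is a 2-D offset and that the offsets simply
    add; real neural transformations are far more complex.
    """
    return np.asarray(retinal_pos) + np.asarray(eye_pos) + np.asarray(head_pos)

# Object at (2, 1) on the retina, eyes rotated by (5, 0), head turned by (1, 3):
body_pos = retina_to_body([2, 1], [5, 0], [1, 3])
# body_pos -> array([8, 4])
```

Even this trivial version makes the point that the same retinal input maps to different body-centred locations depending on eye and head position, which is why the task requires combining several input streams.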

For computations of this type, there are four main questions. First, how do networks extract the relevant information and transform it into a new representation? Second, how do they represent uncertainty, or even whole probability distributions? Third, what aspects of the neural activity carry information? And fourth, how can networks learn to solve these problems? To address these questions I combine analytical techniques, mostly stolen from statistical physics, dynamical systems theory, and machine learning, with large-scale simulations of networks of spiking neurons.
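A minimal example of the kind of spiking-network simulation referred to above: a small recurrent network of leaky integrate-and-fire neurons with random connectivity. All parameter values here are illustrative assumptions, not taken from the text, and real research simulations are far larger and more detailed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Network and neuron parameters (all values illustrative).
N, T, dt = 100, 500, 0.1          # neurons, time steps, ms per step
tau, v_thresh, v_reset = 20.0, 1.0, 0.0
J = rng.normal(0.0, 0.1, (N, N)) / np.sqrt(N)  # random synaptic weights
np.fill_diagonal(J, 0.0)                        # no self-connections
I_ext = 1.1                                     # constant suprathreshold drive

v = rng.uniform(0.0, 1.0, N)      # membrane potentials
spike_count = np.zeros(N)

for _ in range(T):
    spikes = v >= v_thresh                      # which neurons fired
    spike_count += spikes
    v[spikes] = v_reset                         # reset after a spike
    # Leaky integration of external drive plus recurrent spike input.
    v += dt / tau * (-v + I_ext) + J @ spikes

# Mean firing rate in Hz (T * dt ms of simulated time).
mean_rate = spike_count.sum() / N / (T * dt / 1000.0)
```

Analytical techniques from statistical physics and dynamical systems enter when one asks how quantities like `mean_rate`, or the stability of the network state, depend on the connectivity statistics in `J` as N grows large.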