Sebastian Gerwinn

My research interests are in the area of Bayesian inference and computational neuroscience. In particular, I am interested in characterizing the relationship between sensory signals and neural responses. The methods I consider most promising for tackling this kind of task are Bayesian methods.

The applicability of Bayesian methods is often limited by the fact that they are computationally prohibitive. My main focus has therefore been to alleviate this problem by developing approximate methods which are feasible on a much larger scale and can therefore be applied to realistically sized data sets.

The main advantage of a Bayesian treatment lies in the explicit representation of the uncertainties involved. Having access to this kind of knowledge enables one to perform further analyses such as experimental design or model selection.

Stimulus-Response Relationship

I have analyzed the relationship between stimuli and neural responses from three different perspectives: (1) the encoding, (2) the decoding and (3) the joint occurrence perspective.

In a first project, I investigated the system identification task corresponding to the encoding direction of the stimulus-response relationship. I developed an approximate Bayesian inference method that is feasible for models of the generalized linear type, one of the most successful and commonly used classes of generative models. As a result, we obtained not only point estimates of the parameters, but also model-based confidence intervals, which in turn we used for feature selection and for estimating the functional connectivity within populations of neurons.
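
The flavor of this approach can be sketched with a toy Poisson generalized linear model. This is a minimal illustration, not the method itself: it uses simulated data, a Gaussian prior, and a Laplace approximation at the posterior mode in place of the full approximate inference machinery; all numbers and names are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Hypothetical toy data: 500 trials, 5 stimulus features (only 2 drive the cell).
X = rng.normal(size=(500, 5))
w_true = np.array([1.0, -0.8, 0.0, 0.0, 0.0])
y = rng.poisson(np.exp(X @ w_true))

def neg_log_posterior(w, tau=1.0):
    # Poisson GLM log-likelihood plus a Gaussian prior (an assumption made
    # here for simplicity; the actual work uses sparser priors and EP).
    eta = X @ w
    return -(y @ eta - np.exp(eta).sum()) + 0.5 * tau * w @ w

w_map = minimize(neg_log_posterior, np.zeros(5), method="BFGS").x

# Laplace approximation: posterior covariance ~ inverse Hessian at the mode.
lam = np.exp(X @ w_map)
H = X.T @ (X * lam[:, None]) + np.eye(5)   # Hessian of the negative log posterior
se = np.sqrt(np.diag(np.linalg.inv(H)))

# Model-based 95% intervals; features whose interval covers zero are
# candidates for exclusion (a simple form of feature selection).
for i, (m, s) in enumerate(zip(w_map, se)):
    print(f"w[{i}] = {m:+.2f} ± {1.96 * s:.2f}")
```

The point is that the posterior yields an uncertainty estimate for every weight, not just a point estimate.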

Second, I analyzed the relationship from a decoding point of view. Here, using the leaky integrate-and-fire neuron model, I obtained a simple yet accurate decoding algorithm. Again, using a Bayesian treatment, it is possible not only to decode the most likely stimulus but also to assign to each stimulus the probability that it caused the observed neural response.
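
The decoding logic can be illustrated with a much simpler observation model than the leaky integrate-and-fire neuron: a hypothetical Poisson encoder over a discrete set of candidate stimuli. Bayes' rule then yields a probability for each stimulus rather than a single point estimate (all numbers below are made up for illustration).

```python
import numpy as np

# Toy stand-in for the LIF-based decoder: candidate stimuli, each inducing
# a known firing rate, and Poisson-distributed spike counts.
stimuli = np.array([0.5, 1.0, 2.0, 4.0])   # candidate stimulus intensities
rates = 5.0 * stimuli                      # assumed encoding: rate = gain * stimulus
prior = np.full(len(stimuli), 0.25)        # flat prior over stimuli

def decode(spike_count):
    # Posterior over stimuli given an observed spike count (Bayes' rule):
    # p(s | n) ∝ Poisson(n; rate(s)) * p(s)
    log_lik = spike_count * np.log(rates) - rates
    post = prior * np.exp(log_lik - log_lik.max())
    return post / post.sum()

posterior = decode(spike_count=11)
print(dict(zip(stimuli, posterior.round(3))))
```

The output is a full distribution over stimuli, so the decoder reports how confident it is, not just which stimulus is most likely.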

Third, merging both perspectives, I looked at the joint occurrence of stimuli and neural responses. Based on standard descriptive statistics such as the spike-triggered average and the spike-triggered covariance, I built a maximum entropy model. This model can be used both as a generative model and as a decoding model that exhibits the same descriptive statistics as the observed ones, while imposing the fewest additional constraints due to the maximum entropy property.

Generalized linear models are among the most commonly used tools to describe the stimulus selectivity of sensory neurons. Here we present a Bayesian treatment of such models. Using the expectation propagation algorithm, we are able to approximate the full posterior distribution over all weights. In addition, we use a Laplacian prior to favor sparse solutions. Therefore, stimulus features that do not critically influence neural activity will be assigned zero weights and thus be effectively excluded from the model. This feature selection mechanism facilitates both the interpretation of the neuron model and its predictive abilities. The posterior distribution can be used to obtain confidence intervals, which make it possible to assess the statistical significance of the solution. In neural data analysis, the available amount of experimental measurements is often limited, whereas the parameter space is large. In such a situation, both regularization by a sparsity prior and uncertainty estimates for the model parameters are essential.
We apply our method to multi-electrode recordings of retinal ganglion cells and use our uncertainty estimates to test the statistical significance of functional couplings between neurons. Furthermore, we use the sparsity of the Laplace prior to select those filters from a spike-triggered covariance analysis that are most informative about the neural response.
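
The sparsifying effect of the Laplace prior can be demonstrated in miniature: its MAP estimate coincides with L1-penalized regression, so irrelevant features are driven exactly to zero. The sketch below uses a linear-Gaussian toy model and plain coordinate descent rather than the expectation propagation treatment described above; all data are simulated and all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical stand-in data: 8 stimulus features, only the first two
# actually drive the response.
X = rng.normal(size=(200, 8))
w_true = np.array([1.5, -1.0, 0, 0, 0, 0, 0, 0], dtype=float)
y = X @ w_true + 0.1 * rng.normal(size=200)

def laplace_map(X, y, alpha=0.1, iters=100):
    """MAP estimate under a Laplace prior = L1-penalized least squares,
    solved here by coordinate descent with soft thresholding."""
    n, d = X.shape
    w = np.zeros(d)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(iters):
        for j in range(d):
            r = y - X @ w + X[:, j] * w[j]   # residual with feature j removed
            rho = X[:, j] @ r
            w[j] = np.sign(rho) * max(abs(rho) - n * alpha, 0.0) / col_sq[j]
    return w

w_map = laplace_map(X, y)
print(w_map.round(2))   # irrelevant features get exactly zero weight
```

Unlike a Gaussian prior, which only shrinks weights toward zero, the Laplace prior produces exact zeros, which is what makes it usable for feature selection.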

We present a framework for efficient, accurate approximate Bayesian inference in generalized linear models (GLMs), based on the expectation propagation (EP) technique. The parameters can be endowed with a factorizing prior distribution, encoding properties such as sparsity or non-negativity. The central role of posterior log-concavity in Bayesian GLMs is emphasized and related to stability issues in EP. In particular, we use our technique to infer the parameters of a point process model for neuronal spiking data from multiple electrodes, demonstrating significantly superior predictive performance when a sparsity assumption is enforced via a Laplace prior distribution.

There are two aspects to the unsupervised learning of invariant representations of images: First, we can reduce the dimensionality of the representation by finding an optimal trade-off between temporal stability and informativeness. We show that the answer to this optimization problem is generally not unique, so that there is still considerable freedom in choosing a suitable basis. Which of the many optimal representations should be selected? Here, we focus on this second aspect and seek to find representations that are invariant under geometrical transformations occurring in sequences of natural images. We utilize ideas of steerability and Lie groups, which have been developed in the context of filter design. In particular, we show how an anti-symmetric version of canonical correlation analysis can be used to learn a full-rank image basis which is steerable with respect to rotations. We provide a geometric interpretation of this algorithm by showing that it finds the two-dimensional eigensubspaces of the average bivector. For data which exhibit a variety of transformations, we develop a bivector clustering algorithm, which we use to learn a basis of generalized quadrature pairs (i.e. complex cells) from sequences of natural images.
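
Steerability itself can be checked directly in its simplest classical instance. This is an illustrative example, not the anti-symmetric CCA algorithm from the abstract: the pair of first-order Gaussian derivative filters spans the derivative filter in every orientation via a rotation of coefficients.

```python
import numpy as np

# Gaussian derivative filters Gx, Gy on a grid: a two-filter basis that is
# steerable with respect to rotations.
ax = np.linspace(-3, 3, 41)
xx, yy = np.meshgrid(ax, ax)
g = np.exp(-(xx**2 + yy**2) / 2)
Gx, Gy = -xx * g, -yy * g                  # partial derivatives of the Gaussian

# Steering: the derivative in direction theta is a linear combination of the
# basis pair with rotation coefficients (cos theta, sin theta).
theta = 0.7
steered = np.cos(theta) * Gx + np.sin(theta) * Gy

# Direct construction of the rotated derivative filter for comparison.
u = np.cos(theta) * xx + np.sin(theta) * yy   # coordinate along direction theta
direct = -u * g

print(np.abs(steered - direct).max())      # ≈ 0: the pair spans all rotations
```

The learned quadrature-pair bases generalize this property: rotating the input corresponds to a rotation within each two-dimensional subspace of filters.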
