A crucial step towards quantitatively understanding biological vision is to accurately characterize the input/output relationship of retinal ganglion cells. Building on a modeling framework introduced by J. Victor, we have constructed spiking models of primate P and M ganglion cells. The driving inputs to our model cells are time-varying images. Spike trains are generated as follows: Weighted sums of image intensity values are formed by applying separate center and surround spatial weighting functions. These two signals are passed through distinct adaptive temporal filters, and then summed. The combined signal is finally fed to a noisy integrate-and-fire spike generator. Most of the model parameters are estimated from published measurements; the rest are adjusted to bring the model's behavior into agreement with a broad range of experimental observations.
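The processing cascade described above can be sketched in code. This is a minimal illustration, not the authors' implementation: the Gaussian spatial profiles, the first-order (non-adaptive) temporal filters, and all parameter values below are simplifying assumptions standing in for the model's fitted components.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_weights(size, sigma):
    """2-D Gaussian spatial weighting function, normalized to unit sum."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    w = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return w / w.sum()

def exp_filter(signal, tau, dt=1.0):
    """First-order low-pass temporal filter (a stand-in for the model's
    adaptive temporal filters)."""
    out = np.zeros_like(signal, dtype=float)
    a = dt / tau
    for t in range(1, len(signal)):
        out[t] = out[t - 1] + a * (signal[t] - out[t - 1])
    return out

def ganglion_cell_spikes(movie, sigma_c=1.0, sigma_s=3.0,
                         tau_c=5.0, tau_s=20.0,
                         threshold=0.2, noise_sd=0.05, dt=1.0):
    """Cascade: spatial weighting -> temporal filtering -> summation
    -> noisy integrate-and-fire. `movie` is (frames, height, width),
    assumed square in space."""
    size = movie.shape[1]
    wc = gaussian_weights(size, sigma_c)   # center weighting function
    ws = gaussian_weights(size, sigma_s)   # surround weighting function
    # Weighted sums of image intensity, one value per frame
    center = np.tensordot(movie, wc, axes=([1, 2], [0, 1]))
    surround = np.tensordot(movie, ws, axes=([1, 2], [0, 1]))
    # Distinct temporal filters, then antagonistic summation
    drive = exp_filter(center, tau_c) - exp_filter(surround, tau_s)
    # Noisy integrate-and-fire spike generator with reset
    v, spikes = 0.0, []
    for t, g in enumerate(drive):
        v += dt * g + noise_sd * rng.standard_normal()
        if v >= threshold:
            spikes.append(t)
            v = 0.0
    return spikes

movie = rng.random((200, 16, 16))   # toy stimulus: frames x height x width
spikes = ganglion_cell_spikes(movie)
```

The center/surround antagonism appears here as the subtraction of the two filtered signals; because the two pathways have different time constants, the difference is largest for transient inputs.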

Our model serves two purposes. First, it acts as a realistic “front end” for future studies of geniculate and cortical function. Second, it addresses a current controversy: Traditionally, ganglion cell responses are described as sequences of randomly occurring spikes, whose probability of occurrence (rate) varies smoothly with the stimulus. Several investigators have recently challenged this view, arguing that under “natural” conditions, ganglion cells use a sparse code, in which responses consist of long silences punctuated by brief, precisely timed bursts of spikes.

However, natural images cannot be generically defined: they encompass a very broad range of statistical structures. Accordingly, we hypothesize that ganglion cells use a continuum of encoding strategies. To explore this idea, we have systematically studied how the structure of the model's spike trains depends on the power spectra of artificially generated movies and of video recordings of natural scenes. In agreement with this hypothesis, our model shifts smoothly between “sparse” and “continuous” firing modes, depending on the statistics of the input patterns.
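One way to generate artificial movies with a controlled power spectrum is to shape Gaussian white noise in the frequency domain. The sketch below (an illustration, not the stimuli actually used in the study) varies the temporal spectrum as 1/f^alpha: alpha = 0 gives white noise, while larger alpha gives the strong temporal correlations characteristic of natural scenes.

```python
import numpy as np

def powerlaw_movie(n_frames, height, width, alpha, rng=None):
    """Gaussian noise movie whose temporal power spectrum falls off as
    1/f**alpha. Spatial structure is left white for simplicity."""
    rng = rng or np.random.default_rng(0)
    white = rng.standard_normal((n_frames, height, width))
    # Shape the temporal spectrum: amplitude ~ f**(-alpha/2)
    spec = np.fft.rfft(white, axis=0)
    freqs = np.fft.rfftfreq(n_frames)
    freqs[0] = freqs[1]                  # avoid division by zero at DC
    spec *= freqs[:, None, None] ** (-alpha / 2.0)
    return np.fft.irfft(spec, n=n_frames, axis=0)

# Example: a strongly temporally correlated stimulus (alpha = 2)
stimulus = powerlaw_movie(240, 16, 16, alpha=2.0)
```

Sweeping alpha between these extremes produces a family of stimuli along which the transition between firing modes can be mapped.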