Project AGI

Building an Artificial General Intelligence

This site has been deprecated. New content can be found at https://agi.io

Monday, 16 March 2015

Another look at the retina

by David Rawlinson and Gideon Kowadlo

Why the retina is worth a deeper look

Recently, we have been looking at the retina: the light-sensitive cell layers inside the eyeball that detect wavelength and intensity, and compute an initial encoding of this data that is transmitted to the brain proper via the optic nerve. The retina is very interesting and informative because:

- The retina is smaller and less complex than the cortex, making it easier to reverse-engineer.

- The retina has fewer layers than, for example, the visual cortex, and its processing is largely local.

- The retina does not receive feedback from other parts of the central nervous system (CNS), so processing is unidirectional rather than a more complex recurrent process.

- The retina only receives input from the outside world, in the form of light. Therefore, unlike other parts of the CNS, the input data to the retina can be controlled, allowing precise experimentation.

- You don't have to remove the retina, or risk inserting foreign objects, to experiment on it. For example, the retina encodes the light wavelengths detected by cells sensitive to different wavelengths into an opponent-color representation [1]. This gives rise to perceptual artifacts we can all observe, such as "impossible colours" that we can only perceive via careful trickery [2].

- The retina is "an extension of the brain", not an entirely unique structure. What does this mean? During foetal development, the retina initially develops within the brain but is later pinched off, remaining attached only via the optic nerve [3]. There are many similarities between CNS tissue and retinal tissue, for example in immune response and disease pathology, that enable retinal experiments to inform brain research [4].
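To make the opponent-color idea above concrete, here is a minimal sketch of a linear opponent encoding of cone responses. The function name and the specific channel formulas are illustrative assumptions, not the retina's actual (nonlinear, adaptive) circuitry:

```python
def opponent_encoding(l, m, s):
    """Map long/medium/short (L/M/S) cone responses to opponent channels.

    Illustrative linear approximation only; the real retinal encoding
    is nonlinear and adaptive.
    """
    red_green = l - m                 # red vs. green opponency
    blue_yellow = s - (l + m) / 2.0   # blue vs. yellow opponency
    luminance = l + m                 # achromatic (brightness) channel
    return red_green, blue_yellow, luminance

# A "red" stimulus drives L cones more than M cones, so the
# red-green channel is positive and the blue-yellow channel negative:
rg, by, lum = opponent_encoding(1.0, 0.5, 0.25)
```

In this scheme, three cone responses become three opponent signals, which is one reason certain color combinations (e.g. "reddish green") cannot be represented, and why the "impossible colours" trickery mentioned above works.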

The role of dendrite computation is significant because the traditional artificial neuron (used in, e.g., most recurrent and convolutional artificial neural networks today) has linearly weighted inputs that are all integrated simultaneously, not hierarchically. The weighted sum is then passed to an activation function, which is usually nonlinear. This is a good reminder that the simplified model may not be sufficient, or at least not efficient.
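For reference, the conventional artificial neuron described above can be sketched in a few lines. The sigmoid activation used here is just one common choice of nonlinearity:

```python
import math

def ann_neuron(inputs, weights, bias):
    """Classic artificial neuron: a single flat weighted sum of all
    inputs (no per-dendrite structure), then a nonlinear activation."""
    weighted_sum = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-weighted_sum))  # sigmoid activation
```

Note that every input contributes to one simultaneous sum; there is no intermediate integrate-and-test stage, which is exactly the simplification the dendrite findings call into question.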

Retinal cells with multiple levels of dendrite branching and simple dendrite threshold testing would seem to fit the HTM cell model [8] better than the conventional ANN one. The complex dendrite architecture of some retinal and HTM cells might be functionally hierarchical; at the very least, if there is an integrate-and-test step in each dendrite prior to integration at the cell soma, then each cell is a two-level hierarchy.

The HTM cell model. Note that the cell can be activated by any of the dendrites on the right using a logical OR function; each dendrite responds to a unique combination of active input synapses (blue dots). A separate type of dendrite encodes sequential synapses from prior cells (green). Image from Numenta.com.
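The two-level integrate-and-test scheme can be sketched as follows: each dendrite is thresholded independently, and the cell fires if any dendrite crosses its threshold (the logical OR in the figure). The threshold value and the binary synapse inputs are simplifying assumptions for illustration, not Numenta's actual parameters:

```python
def dendrite_active(synapse_inputs, threshold):
    """Level 1: a dendrite fires if enough of its synapses see
    active (1) input. Synapse inputs are simplified to 0/1 here."""
    return sum(synapse_inputs) >= threshold

def htm_like_cell(dendrites, threshold):
    """Level 2: the cell activates if ANY of its dendrites fires
    (logical OR), giving a two-level integrate-and-test hierarchy."""
    return any(dendrite_active(d, threshold) for d in dendrites)

# One dendrite with 3 active synapses is enough to activate the cell:
active = htm_like_cell([[1, 0, 0], [1, 1, 1]], threshold=2)
```

Contrast this with the flat weighted sum of the conventional artificial neuron: here, which synapses are co-active on the same dendrite matters, not just the total input.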

A success story

The video then looks at some impressive results from an artificial retinal encoder that seems to accurately mimic the output of a natural retina, enabling high-quality prostheses. The results come from this paper [9] by Nirenberg and Pandarinath. Nirenberg also gave a TED talk on the encoder [10]. It's looking likely that the retina will soon be well understood. Perhaps the information gateway to the brain will also become the gateway to our understanding of the brain.