Silicon Retina Chip Recognizes Motion Patterns

PORTLAND, Ore. — A cognitive function -- context-dependent classification of motion patterns -- has been implemented on a silicon-retina semiconductor chip by the Neuromorphic Cognitive Systems (NCS) group at the Institute of Neuroinformatics (INI) at ETH Zurich (Eidgenössische Technische Hochschule).

The NCS group was created to develop biomimetic semiconductors that emulate various neural functions. However, instead of merely crafting knee-jerk sensorimotor feedback loops that always react in the same way, the group is attempting to insert context-dependent cognitive functions into the loop, using neuromorphic mixed-signal analog/digital semiconductor chips that adapt each response to a given situation. Like IBM's recent cognitive computing efforts, the NCS group uses digital voltage spikes in an event-driven inter-neuron communications topology.
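The event-driven scheme can be pictured as spikes routed through a priority queue of timestamped address events, so that computation happens only when a neuron actually fires. The following toy sketch illustrates the idea; the class and identifier names are hypothetical and not taken from the NCS group's chips.

```python
from collections import defaultdict
import heapq

class EventRouter:
    """Toy event-driven spike router (illustrative, not an actual NCS design)."""

    def __init__(self):
        self.queue = []                   # min-heap of (time, source) spike events
        self.fanout = defaultdict(list)   # source neuron -> list of target neurons

    def connect(self, src, dst):
        self.fanout[src].append(dst)

    def spike(self, t, src):
        heapq.heappush(self.queue, (t, src))

    def run(self):
        delivered = []
        while self.queue:
            t, src = heapq.heappop(self.queue)
            # Work is done only when an event arrives -- no continuous sampling.
            for dst in self.fanout[src]:
                delivered.append((t, src, dst))
        return delivered

router = EventRouter()
router.connect("retina_pixel_7", "motion_unit_0")
router.spike(0.002, "retina_pixel_7")
print(router.run())  # → [(0.002, 'retina_pixel_7', 'motion_unit_0')]
```

Because nothing executes between events, idle circuitry draws essentially no dynamic power -- the property that makes spiking hardware attractive.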

The group's goal, according to its leader, Giacomo Indiveri, a professor at the University of Zurich, is to compensate for the inherent imprecision of noisy, leaky biological neurons by crafting a systematic strategy for mirroring cognitive behaviors at a higher level. The group believes it has found an effective methodology: first map the noisy low-level circuitry to an abstract computational layer of "model" semiconductor neurons, resulting in what it calls a "soft state machine."

In the silicon retina example, the chip's noisy neuronal bias voltages are mapped to the abstract neuron's model parameters using population-activity measurements. The abstract computational layer then reliably performs the processing steps of a cognitive function -- here, context-dependent classification of motion patterns. Because the soft state machine can be accurately modeled, a hardware synthesis method can be used to automatically interconnect the abstract neurons in the proper configuration. The group is now working to extend the technique to other neuromorphic chips beyond silicon retinas, such as the silicon cochlea, which emulates the inner ear.
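The core statistical idea behind the population-activity mapping can be sketched in a few lines: any single analog neuron is unreliable because of device mismatch, but the mean rate of a pool of them is a stable quantity that an abstract "model" neuron can stand in for. The population size, mismatch level, and target rate below are illustrative assumptions, not figures from the INI chips.

```python
import random

def noisy_neuron_rate(target_rate, mismatch=0.3):
    """Firing rate of one analog neuron, distorted by device mismatch (assumed 30%)."""
    return target_rate * (1.0 + random.gauss(0.0, mismatch))

def population_rate(target_rate, n_neurons=256):
    """Population-activity measurement: mean firing rate over the whole pool."""
    return sum(noisy_neuron_rate(target_rate) for _ in range(n_neurons)) / n_neurons

random.seed(0)
single = noisy_neuron_rate(100.0)   # any one neuron can be far off target
pooled = population_rate(100.0)     # the pooled estimate is far tighter
print(f"single neuron: {single:.1f} Hz, population of 256: {pooled:.1f} Hz")
```

Averaging over N neurons shrinks the rate's standard deviation by roughly the square root of N, which is why the abstract layer can compute reliably on top of imprecise devices.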

Using smart silicon retinas in humans is probably a decade or more away, but in the shorter term, the fact that cognitive functions can be built in should make human-like perception possible for robots.

Power dissipation should be drastically lower, because artificial neurons dissipate very little power except when firing voltage spikes, which typically happens only every few hundred milliseconds.
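A back-of-envelope calculation shows why sparse spiking keeps the average draw so low. The per-spike energy and network size below are illustrative assumptions (only the firing interval comes from the article):

```python
# Back-of-envelope average power of an event-driven spiking network.
ENERGY_PER_SPIKE_J = 1e-11   # assumed ~10 pJ per spike (typical neuromorphic ballpark)
SPIKE_INTERVAL_S = 0.3       # "every few hundred milliseconds" per the article
N_NEURONS = 100_000          # assumed network size

avg_power_w = N_NEURONS * ENERGY_PER_SPIKE_J / SPIKE_INTERVAL_S
print(f"average power: {avg_power_w * 1e6:.1f} microwatts")
```

Under these assumptions, a 100,000-neuron network averages only a few microwatts, since energy is spent only at spike events.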

No power figures are available yet for silicon retinas, but the long-term goal of IBM's cognitive computers is to simulate corelets on a supercomputer that consumes megawatts, then execute them on a cognitive computer that consumes kilowatts -- 1,000-to-1 less power.