Self-Aiming Camera

Researchers at the Beckman Institute's AI Group and Robot Vision
Lab have developed a new camera
system that uses both audio and visual cues to point itself toward
"interesting" activity. The camera is controlled by a neural network
that simulates the superior colliculus of the human brain. A wide-angle
camera and two microphones localize activity, and a second camera with a
narrower field of view is then directed toward the area of interest.
The system contains a database of sounds to help it determine which types
of sound are interesting and, over time, it learns to associate sounds
with the visual targets that produce them.
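The article does not give implementation details, but as a rough illustration of how two microphones can steer a camera, here is a minimal sketch that estimates the inter-microphone time difference of arrival (TDOA) by cross-correlation and converts it to a pan angle. The microphone spacing, sample rate, and simple far-field model are assumptions made for this example, not details from the Beckman system.

```python
import numpy as np

# Assumed parameters for illustration only.
MIC_SPACING_M = 0.2        # distance between the two microphones (assumed)
SPEED_OF_SOUND = 343.0     # m/s at room temperature


def estimate_azimuth(left, right, fs):
    """Return an azimuth estimate in degrees (positive = toward the right mic)."""
    # Cross-correlate the channels; the peak gives the sample lag of the
    # left signal relative to the right signal.
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)
    tdoa = lag / fs  # positive when the left channel lags, i.e. source is to the right

    # A far-field source cannot produce a delay larger than spacing / c.
    max_tdoa = MIC_SPACING_M / SPEED_OF_SOUND
    tdoa = np.clip(tdoa, -max_tdoa, max_tdoa)
    return np.degrees(np.arcsin(tdoa / max_tdoa))


if __name__ == "__main__":
    fs = 16000
    source = np.random.randn(fs)   # one second of broadband noise
    delay = 5                      # extra samples of travel time to the left mic
    left = np.concatenate([np.zeros(delay), source[:-delay]])
    right = source
    azimuth = estimate_azimuth(left, right, fs)
    print(f"Pan the narrow-view camera to roughly {azimuth:.1f} degrees")
```

In a full system, an estimate like this would only provide a coarse direction; the wide-angle camera's visual cues and the learned audio-visual associations described above would refine where the second camera actually points.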