Surveillance Robots Share and Interpret Images

A new system that allows a team of robots to share and interpret information as they move around could enable robots to take over dangerous jobs such as disposing of landmines, cleaning up after a nuclear meltdown, or surveying the damage after a flood or hurricane.

Seeing the same area from many points of view could be confusing to a human, but a computer can manage it, combining all the information to build a "model" of the scene and tracking objects and people from place to place. Courtesy of Cornell University, College of Engineering.
Researchers from Cornell University have developed the system, which would allow robots to conduct surveillance as a single entity with many eyes.

"Once you have robots that cooperate, you can do all sorts of things," said Kilian Weinberger, associate professor of computer science, who is collaborating on the project with Silvia Ferrari, professor of mechanical and aerospace engineering, and Mark Campbell, professor of mechanical engineering.

Their work, "Convolutional-Features Analysis and Control for Mobile Visual Scene Perception," is supported by a four-year, $1.7 million grant from the U.S. Office of Naval Research.

The research will use computer vision to match and combine images of an area captured by several cameras, identifying and tracking objects and people from place to place as it fuses information from fixed cameras, mobile observers and outside sources.
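The article does not describe the team's actual matching algorithm, but one common way to re-identify an object seen by two different cameras is to compare convolutional feature embeddings of each detection and greedily pair the most similar ones. A minimal sketch, assuming each detection has already been reduced to a feature vector (the function names and the 0.8 similarity threshold are illustrative choices, not the researchers' method):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_across_cameras(features_cam1, features_cam2, threshold=0.8):
    """Greedily pair detections from two cameras whose feature
    embeddings are most similar -- i.e. likely the same object.

    Returns a list of (index_in_cam1, index_in_cam2, similarity) tuples.
    """
    matches = []
    used = set()  # cam2 detections already claimed by a cam1 detection
    for i, f1 in enumerate(features_cam1):
        best_j, best_sim = None, threshold
        for j, f2 in enumerate(features_cam2):
            if j in used:
                continue
            sim = cosine_similarity(f1, f2)
            if sim > best_sim:
                best_j, best_sim = j, sim
        if best_j is not None:
            used.add(best_j)
            matches.append((i, best_j, best_sim))
    return matches
```

In practice the embeddings would come from a convolutional network, and the greedy pairing would typically be replaced by an optimal assignment (e.g. the Hungarian algorithm) when many detections overlap.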

Mobile observers could include autonomous aircraft, ground vehicles and humanoid robots wandering through a crowd. The images will be sent to a central control unit that will have access to other cameras looking at the region of interest, as well as access to the internet for help in labeling what it sees.
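How the central control unit would actually combine overlapping reports is not specified in the article, but a standard textbook approach is inverse-variance weighting: each camera contributes a noisy position estimate, and less-certain cameras get less say in the fused result. A minimal sketch under that assumption (not the Cornell team's estimator):

```python
import numpy as np

def fuse_estimates(positions, variances):
    """Fuse noisy 2-D position estimates of one object from several
    cameras, weighting each camera inversely by its measurement variance.

    positions -- list of (x, y) estimates, one per camera
    variances -- list of scalar variances, one per camera
    Returns the fused (x, y) position and its combined variance.
    """
    positions = np.asarray(positions, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)
    fused = (weights[:, None] * positions).sum(axis=0) / weights.sum()
    fused_variance = 1.0 / weights.sum()  # always <= the best single camera
    return fused, fused_variance
```

Note that the fused variance is smaller than any single camera's variance, which is the quantitative payoff of surveying the same area from many points of view.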

The researchers plan to test their system on the Cornell campus, using research robots to "surveil" crowded areas while drawing on an overview from existing webcams. Their work might lead to incorporating the new technology into campus security.