Ground surveillance is a mission normally performed by human assets,
including Army scouts and Marine Corps Force Recon. Military leaders would
like to shift this mission to unmanned systems, removing troops from harm's
way, but unmanned systems lack a capability that currently exists only
in humans: visual intelligence. The Defense Advanced Research Projects
Agency (DARPA) is addressing this problem with Mind's Eye, a program aimed
at developing a visual intelligence capability for unmanned systems.

Humans perform a wide range of visual tasks with ease, something no
current artificial intelligence can do in a robust way. They have inherently
strong spatial judgment and are able to learn new spatiotemporal concepts
directly from visual experience. Humans can visualize scenes and objects,
as well as the actions involving those objects, and they possess a powerful
ability to mentally manipulate those imagined scenes to solve problems. A
machine-based implementation of these abilities would be applicable to a wide
range of missions, including ground surveillance.

The joint military community anticipates a significant increase in the
role of unmanned systems in support of future operations, including tasks
such as persistent stare. By performing persistent stare, camera-equipped
unmanned ground vehicles (UGVs) would take scouts out of harm's way. Such
a capability, however, would not constitute a force multiplier because
human analysts would have to interpret streaming video from these platforms
to detect operationally significant activities. A truly transformative
capability requires visual intelligence, enabling these platforms themselves
to detect operationally significant activity and report it so that warfighters
can focus on important events in a timely manner.

DARPA has contracted with 12 research teams to develop fundamental machine-based
visual intelligence: Carnegie Mellon University, Co57 Systems, Inc., Colorado
State University, Jet Propulsion Laboratory/CALTECH, Massachusetts Institute
of Technology, Purdue University, SRI International, State University of
New York at Buffalo, TNO (Netherlands), University of Arizona, University
of California, Berkeley, and the University of Southern California. These teams
will develop a software subsystem suitable for employment on a camera for
man-portable UGVs, integrating existing state-of-the-art computer vision
and AI while making novel contributions in visual event learning, new spatiotemporal
representations, machine-generated envisionment, visual inspection and
grounding of visual concepts.

DARPA has also contracted with three teams to develop system integration
concepts: General Dynamics Robotic Systems, iRobot and Toyon Research Corporation.
These teams are taking a collaborative approach to developing architectures
incorporating newly developed visual intelligence software into a camera
suitable as a payload on a man-portable UGV.