According to the DARPA news release, “Humans perform a wide range of visual tasks with ease, something no current artificial intelligence can do in a robust way. They [humans] have inherently strong spatial judgment and are able to learn new spatiotemporal concepts directly from the visual experience.”

According to DARPA, the joint military community anticipates a notable increase in the role of UGVs in support of future operations, but concedes that such a rollout would not be a force multiplier as long as the vehicles require human operators to evaluate the visual data they collect.

“A machine-based implementation of such abilities is broadly applicable to a wide range of applications, including ground surveillance,” DARPA announced.

In a system like Mind’s Eye, the analysis would instead be performed simultaneously by an artificial visual intelligence, delivering usable data in real time.

DARPA has contracted with 12 research teams, including JPL and Carnegie Mellon, to develop the machine-based intelligence.

“These teams will develop a software subsystem suitable for employment on a camera for man-portable UGVs, integrating existing state of the art computer vision and AI while making novel contributions in visual event learning, new spatiotemporal representations, machine-generated envisionment, visual inspection and grounding of visual concepts.”

Three other teams have been tasked with developing system integration concepts: General Dynamics Robotic Systems, iRobot and Toyon Research Corporation. Working collaboratively, these teams will incorporate the newly developed visual intelligence software into a camera package suitable for a man-portable UGV.