US military builds a mini-Skynet

The US military has confirmed that it is conducting "basic research" related to hierarchical machine perception and analysis.

Potential applications include visual, acoustic and somatic sensor processing for the detection and classification of objects or activities.

"The quantity of data available to DoD commanders and analysts from new sensor platforms with improved resolution and range poses tremendous challenges," explained DARPA spokesperson Tony Falcone.

"This [information] must be quickly and correctly analyzed, currently by highly trained human operators. As sensor capabilities expand, sophisticated, powerful machines with the ability to replicate, and even surpass, human perceptual capabilities will be required."

According to Falcone, such advanced requirements have prompted DARPA to carefully study recent machine learning "breakthroughs" in the context of its Deep Learning program.

"[Remember], the human visual system uses six layers of cortical processing, in addition to all of the preprocessing done by the retina and the lateral geniculate nucleus. [But] the neural net-based machines we use today generally [only] have two or three layers.

"[Now], Deep Learning isn't a biomimetic program, but if we believe that biological systems exhibit an economy of complexity, this suggests that we need to go deeper and have more layers; we are just beginning to understand how to do that."
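Falcone's contrast between today's two- or three-layer nets and deeper stacks can be sketched as a toy forward pass. This is purely illustrative and has no connection to the DARPA program itself; the layer counts, dimensions and tanh nonlinearity are all invented for the example.

```python
import numpy as np

def forward(x, weights):
    """Run an input through a stack of fully connected layers,
    applying a tanh nonlinearity after each one."""
    a = x
    for W in weights:
        a = np.tanh(a @ W)
    return a

rng = np.random.default_rng(0)

# A "shallow" net of the kind Falcone describes: two weight layers.
shallow = [rng.standard_normal((16, 8)), rng.standard_normal((8, 4))]

# A deeper stack of six layers, loosely echoing the six cortical
# layers mentioned above (dimensions chosen arbitrarily).
dims = [16, 32, 32, 16, 8, 8, 4]
deep = [rng.standard_normal((m, n)) for m, n in zip(dims, dims[1:])]

x = rng.standard_normal((1, 16))  # one 16-dimensional sensor reading
print(forward(x, shallow).shape)  # (1, 4)
print(forward(x, deep).shape)     # (1, 4)
```

Both stacks map the same input to the same output shape; the research question Falcone alludes to is how to train the deeper one effectively.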

Indeed, as Falcone notes, Deep Learning research could eventually allow neural nets to achieve "human-level or better" analysis of video and other sensor modalities.

"Deep Learning should enable commanders to make more informed decisions faster by ensuring that subtle, yet critical correlations that may exist in very large collections of data are uncovered, explored and analyzed.

"The result is that data sources are being used more effectively, yielding greater confidence in the reliability of the information on which subsequent command decisions are made," he added.