Artificial Intelligence Could Help Spot Hidden Bombs

“Let’s go get blown up,” said Staff Sergeant Ashley Hess, platoon sergeant for 2nd Platoon, Able Troop, 3rd Squadron, 71st Cavalry. Hess climbed into the cab of his Mine-Resistant Ambush-Protected armored truck with his driver and two cavalry troopers. It was October 16 in Baraki Barak, a district of Afghanistan south of Kabul. Hess and his platoon would be driving down a dirt road codenamed “Route New York,” on a mission to survey some local farmers.

Route New York is notoriously dangerous. Trees on both sides hide insurgent activity. Deep culverts make perfect hiding places for roadside bombs. The dirt road itself is soft enough that insurgents can bury bombs directly in the paths of American vehicles. For Able Troop, it’s not a matter of whether they will be blown up on Route New York — it’s a matter of exactly when and where.

One group of researchers is trying to take some of the surprise out of Improvised Explosive Devices. Paulo Shakarian, a computer science Ph.D. student, and Professor V.S. Subrahmanian — both of the University of Maryland — along with their colleague Maria-Luisa Sapino of the University of Turin in Italy, have developed software that they say can predict the locations of hidden caches of explosives, by combining on-the-ground intelligence data with computer models that mimic group behavior. In other words, a computer that thinks like an insurgent bomber.

The “Spatio-Cultural Abductive Reasoning Engine” software, “SCARE” for short, uses what’s called “abductive logic,” as opposed to the more widely known “deductive” kind. Deductive reasoning moves from established premises to a conclusion that must be true. Abductive reasoning works backward from observations to the hypothesis that best explains them, something closer to a “hunch” that accounts for otherwise random data. In writing software that can reason abductively, Shakarian, Subrahmanian and Sapino have, in essence, created an artificial intelligence with a degree of intuition.
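In the cache-hunting setting, abduction amounts to asking: given a set of observed attack sites, what small set of supply points would explain all of them? Here is a toy sketch of that idea in Python, using a greedy heuristic; the function names, coordinates, and distance threshold are illustrative assumptions, not details from the SCARE system itself.

```python
import math

def dist(a, b):
    """Straight-line distance between two (x, y) points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def abduce_caches(attacks, candidates, max_dist):
    """Greedily pick a small set of candidate points so that every
    observed attack lies within max_dist of some chosen point.
    (A toy illustration of abductive inference over map data,
    not the published SCARE algorithm.)"""
    unexplained = set(range(len(attacks)))
    chosen = []
    while unexplained:
        # Pick the candidate that explains the most remaining attacks.
        best = max(candidates,
                   key=lambda c: sum(1 for i in unexplained
                                     if dist(attacks[i], c) <= max_dist))
        covered = {i for i in unexplained
                   if dist(attacks[i], best) <= max_dist}
        if not covered:
            break  # No candidate can explain the remaining attacks.
        chosen.append(best)
        unexplained -= covered
    return chosen
```

The greedy choice mirrors the intuitive leap a human analyst makes: of all the stories that could explain the attacks, prefer the simplest one, the fewest caches that account for every incident.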

“The SCARE software is not a stand-alone tool,” Subrahmanian warned. “Military commanders and intelligence analysts would use SCARE in conjunction with their own experience and knowledge of a region, and together with available intelligence to pinpoint likely cache locations.”

According to a University of Maryland press release, the researchers tested SCARE by having the program predict where insurgents had stashed bombs in Baghdad over a 21-month period. They compared the software’s predictions to public data on actual bomb-cache discoveries in Baghdad during that same period, and found that the computer was usually within half a mile of a real cache.

More broadly, some of the algorithms underpinning SCARE can be used to build entire simulated worlds that do more than just mimic bombers’ behavior. These worlds, if accurate enough, could allow military commanders to “tour” a battle zone before deploying, in order to understand the cultural dynamics they might face among the local populace. In a recent article, Subrahmanian highlighted so-called “Stochastic Opponent Modeling Agents,” a form of AI that crunches data on community groups in order to write rules for how that group behaves in certain situations.
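The core of that rule-writing idea can be reduced to counting: from historical records, estimate how often a group takes a given action under a given condition. The sketch below is an illustrative simplification of that behavioral-rule extraction, with hypothetical field names and data; it is not the published SOMA algorithm.

```python
from collections import Counter

def learn_rules(records, condition_key, action_key):
    """Derive simple probabilistic rules of the form
    'when <condition> holds, the group takes <action> with probability p'
    by counting co-occurrences in historical records.
    (An illustrative reduction of the idea behind Stochastic
    Opponent Modeling Agents, not the real system.)"""
    cond_counts = Counter(r[condition_key] for r in records)
    pair_counts = Counter((r[condition_key], r[action_key])
                          for r in records)
    # Conditional frequency: P(action | condition).
    return {(cond, act): n / cond_counts[cond]
            for (cond, act), n in pair_counts.items()}

# Hypothetical example records; keys and values are made up.
history = [
    {"condition": "funding_cut", "action": "attack"},
    {"condition": "funding_cut", "action": "attack"},
    {"condition": "funding_cut", "action": "negotiate"},
    {"condition": "election",    "action": "negotiate"},
]
rules = learn_rules(history, "condition", "action")
```

With enough real data, a table of such rules becomes a crude behavioral model: feed a simulation a condition, and it can roll the dice on what the group does next.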

Subrahmanian rightly describes future users of such systems as potentially “skeptical” of the accuracy of computer models.

Fortunately for developed nations, the rapid development of high-fidelity, military-grade AI dovetails with the emergence of a class of soldiers uniquely suited to using computerized systems. According to Missy Cummings, a researcher at the Massachusetts Institute of Technology, people who play video games are better at controlling groups of highly autonomous military robots than non-gamers.

“It appears that the video gamers in this study were able to self-regulate their pace so that they were able to effectively divide their attention,” Cummings wrote. “Video gamers in this study seemed to have more patience and were willing to allow the automation to do its job, which ultimately ended in better performance.”

Trust is a major factor when combining thinking people and thinking computers in the same system. People must trust the AI in order for the pairing to be most effective. Again, gamers are more likely to trust a computer and quickly reach “consensus” with the machine, Cummings wrote. “Higher degrees of consent were both associated with better performance as well as video game experience.”

Intuitive AI might help future soldiers locate the bomb caches that plague soldiers like Hess in Afghanistan. And video games might help prepare those soldiers for the human-computer partnership.