How artificial intelligence went from an advantage to a worldwide threat

National security leaders from the NGA, NSA and DIA, among other agencies, have come to view artificial intelligence as a worldwide threat. (Master Sgt. Barry Loo/Air Force)

Each spring, the nation’s top intelligence officials make a pilgrimage to Capitol Hill to solemnly discuss the threats the United States faces.

What emerged from a pair of hearings in the House and Senate, and hours of testimony, was a laundry list of threats and a detailed accounting of exactly what keeps military leaders up at night. This year’s presentations also revealed a strategic shift as the Pentagon and intelligence agencies prioritize threats from China and Russia over terrorism for the first time since 2001.

“We will continue to prosecute the campaign against terrorists that we are engaged in today, but great power competition, not terrorism, is now the primary focus of U.S. national security,” Jim Mattis, the secretary of defense, said in January 2018 while unveiling a new national defense strategy.

For many years, machine learning and artificial intelligence have been held up as one answer to preserving U.S. military superiority. But now, with other nations making significant investments in the technology, it is easier for peer competitors to make sense of the copious data and sensors in the field. With that benefit, nation states can shrink the decision space and reach actionable decisions faster than before.

“What’s changed is the world around us and now within us,” Robert Cardillo, director of the National Geospatial-Intelligence Agency, told the Senate Intelligence Committee during a Feb. 13 hearing. “What we used to hold exclusively — because we had capabilities that others didn’t — is now more shared.”

What became clear from this year’s hearings is that military leaders are now universally holding up artificial intelligence as a threat to U.S. military operations.

Because many of the tools in the computing world that make analysis possible are available commercially — and thus available to all — operational concepts will be key as these modern capabilities become available worldwide, said Lt. Gen. Robert Ashley, director of the Defense Intelligence Agency, during the same hearing.

Adversaries taking advantage

National security officials have listed China and Russia as top threats, indicating they have made tremendous technological progress. China, for example, has a national strategy designed to harness the power of artificial intelligence. These experts say China is using AI in offensive and defensive cyber operations, as well as for intelligence, surveillance and reconnaissance. Russia is also experimenting with AI for drone swarms and electronic warfare.

“We’re seeing all of our near-peer competitors invest in these kinds of technologies because it’s going to get them to decision cycles faster, allow them to digest information in greater volumes and have a better situational awareness of what’s happening in the battlespace” and in some cases what’s happening in the strategic environment, Ashley said.

Adm. Michael Rogers, director of the National Security Agency, told the Intelligence Committee that five or 10 years ago he’d look at data sets and think they were so large there was no way an adversary could generate insights from them. Those days are over.

“I don’t have those kinds of conversations anymore,” he said. “With the power of machine learning, artificial intelligence and big-data analytics, data concentrations now increasingly are targets of attraction to a whole host of actors. You have watched [China] and others engage in activity designed to access these massive data concentrations.”

Former Deputy Secretary of Defense Bob Work has warned that potential competitors have reached parity with the United States in battle networks. These networks can include sensor grids that look at what is happening in theater, intelligence grids that make sense of what’s happening and determine desired effects, effects grids to make those goals happen, and logistics and support grids that keep the whole operation running.

Increasing manpower is no longer the answer to these military problems.

“We’re not going to industrial age out of this. ‘Well, it’s just hire 10,000 more people.’ That’s not a sustainable strategy,” Rogers told the Senate Armed Services Committee during a Feb. 27 hearing. “Among the things we’re looking at — and we’re not the only ones — is how can you apply technology to overcome the human capital piece.”

Technological solutions must augment, not replace, humans. Work has commonly described this as the Iron Man, not Terminator, solution. “I tend to think of it more as Iron Man where it’s human-machine collaboration — where artificial intelligence helps Tony Stark make decisions and augments his capabilities,” Work said in a January podcast.

Perhaps the best example of this is using machines to ferret out meaningful intelligence in gigantic data sets. Information is power, except when there is so much of it that sifting through it and making timely, actionable decisions becomes nearly impossible.

“We are now at the point for the first time I can remember where [processing, exploitation and dissemination] is the shortfall more so than the platforms themselves,” Lt. Gen. Jack Shanahan, director of defense intelligence for war-fighter support, said during an event in March 2017.

Machines can scan data sets and video feeds far faster than any human and flag critical elements for review. Analysts can then spend their time interpreting that intelligence to enhance and inform decision-making on the battlefield.
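In practice, this kind of machine-assisted triage often amounts to scoring incoming records and surfacing only the highest-priority items for a human analyst. The sketch below illustrates the idea in miniature; the feed names and scores are invented, and in a real system the scores would come from a trained model rather than being hard-coded.

```python
import heapq

# Hypothetical intelligence "records": (identifier, machine-assigned score).
# In a real pipeline the score would come from a trained model; it is
# supplied directly here to keep the sketch self-contained.
records = [
    ("feed-001", 0.12),
    ("feed-002", 0.97),
    ("feed-003", 0.34),
    ("feed-004", 0.88),
    ("feed-005", 0.05),
]

def triage(records, k):
    """Return the k records the machine flags as most worth an analyst's time."""
    return heapq.nlargest(k, records, key=lambda r: r[1])

flagged = triage(records, 2)
print(flagged)  # the two highest-scoring items, queued for human review
```

The point of the design is division of labor: the machine does the exhaustive scan, and the human attention budget is spent only on the handful of items that score highest.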

“Existing machine-learning technology, for example, could enable high degrees of automation in labor-intensive activities such as satellite imagery analysis and cyber defense,” according to the intelligence community’s 2018 Worldwide Threat Assessment.

Cardillo has been aggressive in pursuing automation and machine-learning technologies to help unburden analysts. During a 2016 congressional hearing, he described his frustration with human-machine teaming at NGA, offering an example in which an analyst manually counted 25,000 buildings in a particular area because there was no algorithm to do it for him. “Think of the hours he had to spend to do that,” Cardillo exclaimed. “I turned to my head of research and I said: ‘Don’t let that happen again.’ ”

NGA, as a result, has been trying to shift the workload to 75 percent automation and 25 percent human analysis, freeing analysts to tackle hard problem sets and provide anticipatory intelligence.
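The building-counting chore Cardillo describes is exactly the kind of task that is trivial to automate once imagery has been reduced to a machine-readable form. A minimal sketch, assuming a real pipeline would first use a trained segmentation model to turn satellite imagery into a binary built-up/background mask: counting "buildings" then reduces to counting connected groups of marked pixels. The toy grid below is invented for illustration.

```python
from collections import deque

# Toy binary "imagery": 1 marks built-up pixels, 0 marks background.
# A real system would derive this mask from satellite imagery with a
# trained model; this grid is hand-made to keep the sketch runnable.
grid = [
    [1, 1, 0, 0, 1],
    [1, 0, 0, 0, 1],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 1, 0],
]

def count_buildings(grid):
    """Count connected groups of 1s (4-connectivity), one group per 'building'."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1 and not seen[r][c]:
                count += 1  # found a new, unvisited building
                queue = deque([(r, c)])
                seen[r][c] = True
                while queue:  # flood-fill the rest of this building
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] == 1 and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
    return count

print(count_buildings(grid))  # 3
```

At the scale Cardillo cites, 25,000 buildings, this runs in well under a second, which is the gap between the manual and automated approaches that the 75/25 goal is meant to close.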

How is DoD solving the problem?

The White House’s budget request for fiscal 2019 makes clear the Pentagon is looking to spend on several programs related to AI, machine learning and automation as a means of increasing decision speed.

Line items from the Pentagon’s FY19 budget request point to funding asks in a wide array of AI applications. These include training and war gaming, combat systems, and robotics.

A few examples include an $87 million request for Air Force experimentation to put operational AI in simulations and field experiments; a $49 million ask for a Navy prototype development program to focus on innovative combat system technologies; a $7.1 million ask for a Marine Corps program to develop drone swarming technology that can fuse AI to enhance situational awareness; and an Army program seeking $4.6 million to improve robots’ perception of their environments.

Other programs remain underway. On a conceptual level, Work created the so-called Third Offset strategy, which sought to counter competitors’ recent advancements in theaterwide C4I grids by leveraging human-machine teaming through artificial intelligence.

Current officials have acknowledged that the notion behind the Third Offset lives despite no direct or public reference to the moniker. Another government program, Project Maven, aims to accelerate the Department of Defense’s integration of big data and machine learning, first focusing on processing full-motion video in Iraq and Syria. As top adversaries invest in these capabilities, thereby increasing the pace of decision-making during wartime operations, leaders are adamant the U.S. must take this seriously.

“From a military perspective, my concern is you potentially lose speed and knowledge. That’s a terrible combination as a warrior,” Rogers told the Senate Armed Services Committee in response to a question asking what he fears if the U.S. doesn’t get this right.

“Speed and knowledge are advantages for us historically,” he then added. “One of my concerns is, if we’re not careful, AI potentially gives opponents speed and knowledge better than ours. I’m not arguing that’s going to happen but I acknowledge we’ve got to look at it.”