The document is intended to make commanders think through the implications of their new artificial-intelligence tools.

The Defense Department will unveil a new artificial intelligence strategy perhaps as early as this week, senior defense officials told Defense One. The strategy—its first ever—will emphasize the creation and tailoring of tools for specific commands and service branches, allowing them to move into AI operations sooner rather than later.

“DOD has spent the past 50 years treating AI as a [science and technology] concern. This strategy reflects an additional imperative, which is to translate the technology into decisions and impact in operations,” said one official with direct knowledge of the strategy.

While the military has long played a key role in AI development by helping to fund the design of programs such as Apple’s Siri, the Department is preparing for its future by trying to learn from private industry — particularly “the small number of companies that do this well,” the official said. “We’ve studied their path to success and we’ve gleaned lessons from … those in industry who are on a journey to transform themselves.”

Safety First

One thing the new strategy won’t do, the official said, is alter the military’s philosophy on using autonomous systems in combat. Under a 2012 directive, a human operator must always be available to oversee and override an autonomous weapon’s actions.

Indeed, one reason the new strategy will focus on more immediate, operational applications of AI—as opposed to more theoretical applications in the far future—is to force operators and commanders to think through safety and ethical implications as they figure out what they want AI to do for them. “When you translate the tech into impact in operations, you have to think more seriously about ethics and safety,” said the official. Safety “is a major focus and the language [of the strategy] is very clear about how important we consider these topics.”

The Pentagon’s AI ambitions suffered a setback—or at least the appearance of one—last spring when hundreds of Google employees petitioned the company to end work on Project Maven.

Since then, the Pentagon has taken a number of steps to appease Silicon Valley employees and executives who are skeptical of, or outright hostile to, helping the military harness AI. The department is drafting a set of ethical principles and is reaching out to academia and ethicists for help.

In December, a group of ethicists affiliated with New York University’s AI Now Institute published a white paper spelling out the reservations many programmers and technologists have about the military use of AI. “Ethical codes can only help close the AI accountability gap if they are truly built into the processes of AI development and are backed by enforceable mechanisms of responsibility that are accountable to the public interest,” it reads. One Google employee involved in the company’s Project Maven protest told Defense One that a list of ethical principles would be unlikely to persuade them that partnering with the military was a good idea.

In other words, the Department still has lots of convincing to do.

The strategy will also focus on building what the official described as an AI-ready workforce. The official pointed to the success of massive open online courses (augmented by live classroom instruction) and AI boot camps such as Google’s Machine Learning Ninja camp as private-sector examples of how to quickly train people to work on machine learning. “Here again we’re looking to those companies that succeeded in systematically transforming their workforce,” the official said.