The military advantage in AI will go to the nation with the most strategically focused investments and the most comprehensive enabling environment. When the concepts for the DOD’s current Third Offset were being developed, strategists identified its greatest risk of failure: that the US military would enthusiastically but haphazardly adopt enabling technologies—especially artificial intelligence—in a “peanut butter spread,” instead of concentrating investment on high-priority areas. Recognizing this very real risk, the Center for Autonomy and AI at CNA developed a framework for strategic Navy investment in artificial intelligence. A new publication by the center recommends that the Navy develop a strategy and implementation plan for AI applications.

Significant advances in artificial intelligence over the past decade have changed our way of life, and the impacts of AI are only expected to accelerate. AI is increasingly discussed as a source of good in many areas, such as medicine, education, and law enforcement. At the same time, the idea of military applications of AI and the related attribute of autonomy have created considerable controversy. There are strong concerns about these technologies, even speculation that they could lead to the end of the world. This CNA report examines commonly held concerns as reported in the media or voiced in international gatherings.

Unmanned systems are proliferating at an explosive pace, and the U.S. Navy is increasingly employing and encountering them. Despite their many benefits, unmanned systems create the potential for unanticipated second-order effects. Of particular concern is the potential for unmanned systems to alter escalation dynamics between states, increasing risks to militaries and states. In light of this, U.S. Fleet Forces Command asked CNA to explore the impact of unmanned systems on escalation dynamics, focusing on how near-term unmanned systems (2017-2025) affect state-on-state competition in the maritime domain during shaping and deterrence operations.

This report examines the issue of human control with regard to lethal autonomy, an issue of significant interest in United Nations discussions in the Convention on Certain Conventional Weapons (CCW) forum. We analyze this issue in light of lessons and best practices from recent U.S. operations. Based on this analysis, we make the case for a wider framework for the application of human control over the use of force.

In the coming years, lethal autonomous weapons systems are likely to have a revolutionary impact on warfare. These weapons can select and attack targets using artificial intelligence, without direct human control. Fear of their potential for unintended consequences has led to calls for a ban on autonomous weapons. What should the U.S. expect of such arms control negotiations? What role can the U.S. play in talks when autonomous weapons are at the core of the Third Offset plan to modernize U.S. national security?

The benefits, risks, and sensitivities surrounding autonomous weapon systems (AWS) demand an objective analytical approach to answer both whether and how the U.S. should explore the use of such systems. CNA analysis finds that these technologies could bring important—potentially even critical—capabilities to the U.S. military, and singles out opportunities to explore AWS in lower-risk operational environments.

The military is on the cusp of a major technological revolution, in which warfare is conducted by unmanned and increasingly autonomous weapon systems. However, unlike the last "sea change," during the Cold War, when advanced technologies were developed primarily by the Department of Defense (DOD), the key technology enablers today are being developed mostly in the commercial world. This study looks at the state of the art in AI, machine learning, and robotics technologies, and their potential future military implications for autonomous (and semi-autonomous) weapon systems.

Although no one can predict how AI will evolve or how it will affect the development of military autonomous systems, we can anticipate many of the conceptual, technical, and operational challenges that DOD will face as it increasingly turns to AI-based technologies. We identified four key gaps facing DOD as the military evolves toward an "autonomy era": (1) a mismatch of timescales between the pace of commercial innovation and DOD's acquisition process; (2) an underappreciation of the fundamental unpredictability of autonomous systems; (3) the lack of a universally agreed-upon conceptual framework for autonomy; and (4) a disconnect between the design of autonomous systems and CONOPS development.

CNA conducts analysis for the U.S. Navy, the Department of Defense (DOD), and other sponsors, ranging across policy, strategy, organizational processes, technical performance of military systems, and current operations. Because of the expected impact of autonomy and artificial intelligence (AI) on the character of warfare, CNA has created a Center for Autonomy and Artificial Intelligence to focus on these emerging technologies and their significant role in U.S. defense policy and all the military services. The Center combines CNA's strengths and experience in conducting objective analysis of U.S. military operations with focused expertise in autonomy and other aspects of AI.