[Editor’s Note: Now that another month has flown by, Mad Scientist Laboratory is pleased to present our June edition of “The Queue” – a monthly post listing the most compelling articles, books, podcasts, videos, and/or movies that the U.S. Army’s Training and Doctrine Command (TRADOC) Mad Scientist Initiative has come across during the past month. In this anthology, we address how each of these works either informs or challenges our understanding of the Future Operational Environment. We hope that you will add “The Queue” to your essential reading, listening, or watching each month!]

I know — I cheated and gave you two articles to read. These “dueling” articles demonstrate the early state of our understanding of the role of humans in decision-making. The Harvard Business Review article describes findings in which human – Artificial Intelligence (AI) partnerships combine the leadership, teamwork, creativity, and social skills of humans with the speed, scalability, and quantitative capabilities of AI. This is essentially the idea of “centaur” chess, which has been prevalent in discussions of human and AI collaboration. Conversely, the MIT Technology Review article describes the ongoing work to build AI algorithms that are incentivized to collaborate with other AI teammates. Could it be that collaboration is not a uniquely human attribute? The ongoing work on integrating AI into the workforce and in support of CEO decision-making could inform the Army’s investment strategy for AI. Julianne Gallina, one of our proclaimed Mad Scientists, described a future where everyone would have an entourage and Commanders would have access to a “Patton in the Pocket.” How the human operates on or in the loop, and how Commanders make decisions at machine speed, will be informed by this research. In August, the Mad Scientist team will conduct a conference focused on Learning in 2050 to further explore the ideas of human and AI teaming with intelligent tutors and mentors.

Dan Brown’s famous symbologist Robert Langdon returns to avenge the murder of his friend, tech developer and futurist Edmund Kirsch. After Kirsch is killed in the middle of presenting what he advertised as a life-changing discovery, Langdon teams up with Kirsch’s most faithful companion, his AI assistant Winston, in order to release Edmund’s presentation to the public. Winston is able to access Kirsch’s entire network, give real-time directions, and make decisions based on ambiguous commands — all via Kirsch’s smartphone. However, this AI system doesn’t appear to know Kirsch’s personal password, and can only assist Langdon in his mission to find it. An omnipresent and portable assistant like Winston could greatly aid future warfighters and commanders. Having this scope of knowledge on demand is beneficial, but future AI will be able not only to regurgitate data, but to present the Soldier with course of action analyses and decision options based on that data. Winston was also able to mimic emotion via machine learning, which can reduce Soldier stress levels and present information in a humanistic manner. Once an AI has been attached to a Soldier for a period of time, it can learn the particular preferences and habits of that Soldier, and make basic or routine decisions and assumptions for that individual, anticipating their needs, as Winston does for Kirsch and Langdon.

Mad Scientist Laboratory readers are already familiar with the expression, “warfare at machine speed.” As our adversaries close the technology gap and potentially overtake us in select areas, there is clearly a “need for speed.”

“… speed matters — in two distinct dimensions. First, autonomy can increase decision speed, enabling the U.S. to act inside an adversary’s operations cycle. Secondly, ongoing rapid transition of autonomy into warfighting capabilities is vital if the U.S. is to sustain military advantage.” — Defense Science Board (DSB) Report on Autonomy, June 2016 (p. 3).

In his monograph, however, author and former Clinton Administration Secretary of the Navy Richard Danzig contends that “superiority is not synonymous with security,” citing the technological proliferation that almost inevitably follows technological innovation and the associated risk of unintended consequences resulting from the loss of control of military technologies. Contending that speed is a form of technological roulette, former Secretary Danzig proposes a control methodology of five initiatives to help mitigate the risks posed by disruptive technologies, and calls for increased multilateral planning with both our allies and our opponents. Unfortunately, as with the doomsday scenario played out in Nevil Shute’s novel On the Beach, it is “… the little ones, the Irresponsibles…” that have propagated much of the world’s misery in the decades following the end of the Cold War. It is these Irresponsible nations, along with non-state actors and Super-Empowered Individuals experimenting with and potentially unleashing disruptive technologies, that will not be contained by any non-proliferation protocols or controls. Indeed, neither will our near-peer adversaries, if these technologies promise to offer a revolutionary, albeit fleeting, Offset capability.

This article illustrates how the Pentagon’s faith in its own technology drove the Department of Defense to trust it would maintain dominance over the electromagnetic spectrum for years to come. That decision left the United States vulnerable to new leaps in technology made by our near-peers. GEN Paul Selva, Vice Chairman of the Joint Chiefs of Staff, has concluded that the Pentagon must now keep up with near-peer nations and reestablish our dominance of electronic warfare and networking (spoiler alert: we are not keeping up!). This is an example of a pink flamingo (a known known), as we know our near-peers have surpassed us in technological dominance in some cases. In looking at technological forecasts for the next decade, we must ensure that the U.S. is making the right investments in Science and Technology to keep pace with our near-peers. This article demonstrates that timely and decisive policy-making will be paramount in keeping up with our adversaries in the fast-changing and agile Operational Environment.

Researchers at MIT have discovered a way to “see” people through walls by tracking WiFi signals that bounce off of their bodies. Previously, the technology’s fidelity was limited to “blobs” behind a wall — essentially telling you that someone was present, but giving no indication of their behavior. The breakthrough is using a trained neural network to interpret the bouncing signals and match them to the shape of the human skeleton. This is significant because it could give an added degree of specificity to first responders or fire teams clearing rooms. The ability to determine whether an individual on the other side of the wall is potentially hostile and holding a weapon, or a non-combatant holding a cellphone, could be the difference between life and death. This also raises questions about countermeasures. WiFi signals are seemingly everywhere and, with this technology, could prove to be a large signature emitter. Will future forces need to incorporate uniforms or materials that absorb these waves or scatter them in a way that distorts them?
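The core idea — a trained network regressing skeletal keypoints from reflected RF signals — can be sketched in miniature. This is a purely illustrative toy, not the MIT system: the grid size, layer widths, keypoint count, and randomly initialized weights (standing in for a trained model) are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

N_KEYPOINTS = 14   # assumed: head, shoulders, elbows, wrists, hips, knees, ankles
GRID = 32          # toy spatial resolution of the RF reflection "heatmap"

def relu(x):
    # Standard rectified-linear activation.
    return np.maximum(0.0, x)

# Randomly initialized weights stand in for a network trained on paired
# RF recordings and camera-derived skeleton labels.
W1 = rng.normal(scale=0.01, size=(GRID * GRID, 128))
W2 = rng.normal(scale=0.01, size=(128, N_KEYPOINTS * 2))

def predict_skeleton(rf_heatmap):
    """Map a (GRID, GRID) RF reflection map to (N_KEYPOINTS, 2) coordinates."""
    hidden = relu(rf_heatmap.reshape(-1) @ W1)
    return (hidden @ W2).reshape(N_KEYPOINTS, 2)

rf_frame = rng.random((GRID, GRID))   # simulated through-wall reflections
skeleton = predict_skeleton(rf_frame)
print(skeleton.shape)                 # one (x, y) estimate per keypoint
```

The real system uses far richer inputs (multi-antenna RF heatmaps over time) and a convolutional architecture, but the input-to-keypoints mapping shown here is the essence of the approach.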

A study performed by the University of Maryland determined that people recall information better when seeing it first in a 3D virtual environment, as opposed to a 2D desktop or mobile screen. The Virtual Reality (VR) system takes advantage of what’s called “spatial mnemonic encoding,” which allows the brain not only to remember something visually, but to assign it a place in three-dimensional space, aiding retention and recall. This technique could accelerate learning and enhance retention when we train our Soldiers and Leaders. As VR hardware becomes smaller, lighter, and more affordable, custom mission sets, or the skills necessary to accomplish them, could be learned on the fly, in theater, on a compressed timeline. This also allows education to be distributed and networked globally without the need for a traditional classroom.

This book is fascinating for two reasons: 1) It utilizes one of the greatest science fiction series (almost a genre unto itself) to brilliantly illustrate military strategy concepts, and 2) It is chock full of Mad Scientists as contributors. One of the editors, John Amble, is a permanent Mad Scientist team member, while another editor, Max Brooks (author of World War Z), and contributor August Cole are officially proclaimed Mad Scientists.

The book takes a number of scenes and key battles in Star Wars and uses historical analogies to help present complex issues like civil-military command structure, counterinsurgency pitfalls, force structuring, and battlefield movement and maneuver.

One of the more interesting portions of the book is the concept of ‘droid armies vs. clone soldiers and the juxtaposition of that with the future testing of manned-unmanned teaming (MUM-T) concepts. There are parallels in how we think about what machines can and can’t do and how they think and learn.

If you read, watch, or listen to something this month that you think has the potential to inform or challenge our understanding of the Future Operational Environment, please forward it (along with a brief description of why its potential ramifications are noteworthy to the greater Mad Scientist Community of Action) to our attention at: usarmy.jble.tradoc.mbx.army-mad-scientist@mail.mil — we may select it for inclusion in our next edition of “The Queue”!



Special thanks to Small Wars Journal for continuing to graciously host Mad Scientist content here.