Technology’s dilemmas: Are we wired to respond?

Over the past two decades, Peter W. Singer, strategist and Senior Fellow at the New America Foundation, has tracked the evolution of technology in the battlespace, authoring a series of books, from Wired for War to Cybersecurity and Cyberwar, that have captured the challenges modern militaries face in accepting and confronting game-changing capabilities.

This June he will publish Ghost Fleet: A Novel of the Next World War, an exploration of some of the themes that could shape our next major conflict. Singer, the keynote speaker at the upcoming Kingston Conference on International Security, which will address robotics in military operations, recently spoke with editor Chris Thatcher.

What’s the premise of Ghost Fleet?

The book is a look at what would happen if the brewing cold war with Russia and China were ever to turn hot. It is framed in terms of that scenario, but the research for it is really about wrestling with the key technologies and trends that might shape future wars. It ranged from gathering information on the latest Chinese unmanned systems prototypes, to the U.S. Navy's electromagnetic railgun, to vulnerabilities that are being baked into the Joint Strike Fighter, to interviews with the people who would fight in such a war, from U.S. Navy ship captains, to fighter pilots, special operators, Chinese generals, and Anonymous hackers.

To borrow from Homer Simpson’s comment on alcohol, is technology the cause of, and solution to, all of the military’s problems?

It has become fashionable recently for leaders to argue that one of the lessons of the last decade of war is that, as one U.S. military four-star put it to me, “technology doesn’t matter in the human-centric wars we fight.” That assumes a definition of technology as something that is exotic and unworkable. I like to paraphrase musician Brian Eno, who essentially said, technology is the name we give to things that we don’t use every day. If we use it every day, we don’t call it technology any more. Whether it is a stone or a drone, it is simply a tool that we apply to a task.

I think the fast pace of technological change is our biggest challenge right now. That change encapsulates everything from Moore's Law, when we're talking about computing power and chips, to the Law of Accelerating Returns, which considers technology's impact on everything from business to battlefield rifles. And we see that its advance is not linear, it is exponential; it's not additive, it is multiplying upon itself. What we are seeing are all sorts of new technologies coming at us faster and faster with greater and greater power, and that is particularly hard to adapt to for organizations that are conventional in their form and design – and sometimes their thinking.

There is a vast array of technologies that are game-changers, but while that gives you capabilities that you could not have imagined a generation earlier, it also serves up dilemmas you wouldn’t have imagined either. Those dilemmas are everything from questions of tactics, doctrine, organizational design and recruiting, to law and ethics.

This technological change cuts across every service silo, every government department. Is there a culture change required as well? Is new thinking needed to understand how/where these technologies can best be applied?

Absolutely. We wrestle with that in the book. In cyber conflict, for example, who is the ideal cyber warrior and in what kinds of units will he or she be organized? Is it classic military command, where somebody is trained up through the military? Or is it a 17-year-old hacker who is in a cyber militia? What will a group like Anonymous do in this conflict space? Or, if you are looking at small unit tactics, how will an individual rifleman operate? What is the proper size of their unit? How about for a counterinsurgent: is it always going to look the same as it does right now?

Is the current defence industry organized and structured in a manner that it would actually be effective and useful in a major state-on-state war? In the U.S., we turned to Detroit to serve as the arsenal of democracy in the last world war. Could our current defence industry fill that role? Or would we turn to Silicon Valley? And in turn, how would Silicon Valley respond?

We could go on and on. We assume stability in our own structures, in our current technologies, or in what our adversaries are trying to do to us. But that is not set in stone.

I’ve had a vision of a poor colonel trying to herd a room full of Sheldon Coopers, all of whom think they are much smarter than he is and are all questioning his orders.

That is not just a theoretical or funny question for the future. We are wrestling with this right now. How do you design your Reserve system for cyber? The U.S. model, which is essentially to take a new capability and put it into an old box? We've taken the National Guard Reserves and said, ok, now you are all cyber. Or is it the Estonian model, where they have in essence a militia for their cyber defence corps? Is it the fuzzy proxy actor model that China has with its cyber militia? These are real questions right now that are uncomfortable to talk about.

What do robotics and artificial or machine intelligence do to the speed of decision making? Militaries insist there will be a man in the loop when it comes to unmanned or semi-autonomous systems but does that man, in fact, become the weak link because he can’t make decisions fast enough to respond?

Decision making speeds up exponentially, and yet time still matters. Let me explain that Buddha-sounding statement. In whatever domain of conflict you care about, the OODA loop (observe, orient, decide, act) is shrinking to an almost infinitesimal scale. In cyber conflict, for example, the speed is digital speed and the human role is moved into a managerial one. It is the same when we look at something like air defence: 30 different nations have systems that are somewhat autonomous in that they automatically shoot down incoming rocket or mortar fire. When you have that speed combined with the diversity of data coming in, the model of the commander on the bridge of a warship receiving information and delegating responses – a model that has been the way of naval warfare for hundreds of years and is equally the vision of the Starship Enterprise – that, frankly, may not be possible in real conflict moving forward.

This also has a legal edge to it. The positive role that lawyers have played in putting in a series of deliberations for rules of engagement when it comes to airstrikes and use of artillery, that has worked because we have had the luxury of time in most instances. The other side has not contested us in the air, at sea, in space or in cyber space. We may not have that luxury moving forward.

The point is, yes, the speed is picking up, but the sheer mass of data flowing at you means that the human role will certainly change, and that change may move all the way up to commanders.

That's not to say time doesn't matter. When we look at corporate experiences with cyber breaches, what determines their success or failure, what determines the cost of the breach, is shaped not by what happened in the moment of the breach but by the decisions they made in the months and weeks beforehand and in the days afterward, in how they dealt with it.

In cyber, where there is not an obvious smoking cloud over a blown-up building, sometimes your best response may be, to quote the famous military theorist Taylor Swift, to "shake it off" and act like it didn't hit you and leave the other side in question. Or study the weapon they've sent, close up your vulnerabilities, learn from that weapon, and then deploy it back. In cyber there is a back and forth of generations of technology, so you want to control the pace of that. So, yes, the decision cycle is being compressed, but there is still a human interaction that is often political or strategic in nature.

We’ve heard a lot in recent years about the strategic corporal. You raise the prospect of the tactical general who now has greater access to all aspects of the battlefield. What does technology of this nature do to clarity of command?

The strategic corporal is the idea of younger and younger troops making decisions that can shape the outcome of the war. The flipside that we don’t like to talk about is the tactical general. It’s the ability of the technology to allow leaders to reach down into the battlefield and not just monitor what is happening at a specific level, but actually make decisions – the shoot or not shoot decision. We have seen it in different ways in Iraq 2.0, Iraq 3.0 and Afghanistan. And we see it both from military and political leaders.

I recall a four-star proudly telling me that he personally decided what size bomb should be used on a certain mission, whether to strike or not strike a certain compound, based on what he saw in the Predator drone feed that he'd watched for multiple hours in his command centre. He was saying this to show that he was taking personal responsibility, that if something went wrong he would be held accountable, but there was also a little bit of, I'm smarter than everybody else. He wasn't trained as a JAG or as a targeting officer, but even if he got it right, he's doing someone else's job and that person can't do the general's job. We've heard pilots in Iraq 3.0 say they have never been more frustrated in their careers because of how targets are being vetted all the way up to the level of the White House.

What happens when the network goes down and you have a generation that has been trained this way, where there's always someone above watching and taking over the decisions? What if it's not like that?

Do you have a sense of what this technology is doing to doctrine, how it is being adapted to this environment?

Earlier today I actually met with a group from the British military wrestling with that very question. Everybody is wrestling with it, including the Russians and the Chinese. One of the lessons I have drawn from my work is that there are many echoes of the period surrounding the First World War. There were a whole series of science fiction-like technologies becoming real and being introduced into war, whether it was the tank, the aeroplane or the submarine, and they provoked all sorts of tough questions for everything from tactics to strategy, to law, to ethics, to doctrine.

Doctrine is not just about what you do, but who does it and what status do they have within that military. So the debate back then over armoured doctrine was not just about the best way to use the tank, it was about what it meant for horse cavalry and the infantry.

We have a similar process going on right now, whether it is with cyber warfare or with the introduction of unmanned systems. I would argue that the lesson from that period that will perhaps hold true is that the winner will not necessarily be the first to get the technology, or the one with the best technology, or the most of the technology; it will be the one who is able to package it together with the correct doctrine. Building and implementing new doctrine is messy and incredibly painful and you don’t know whether it has really worked or not until the worst possible moment.

Has the success of UAVs changed how commanders view this technology?

Yes and no. It is a technology that was once looked at as more science fiction than science experiment. Commanders once asked what it was good for; now it's the most in-demand system for ISR and strike capabilities, whether for ground troops in Afghanistan or theatre commanders in the Pacific.

That said, I was recently chatting with someone who described a battle going on right now between manned and unmanned in future acquisitions, and he noted that manned is winning. He went down a long list of examples of the next generation of weapon systems, such as combat aircraft, where we had a choice and we kept choosing manned again and again. And the reason was not based solely on the capability of the system, but was invariably shaped by everything from organizational culture to big business, to promotions.

This is not unlike the debates in the 1930s over horse cavalry versus mechanized forces, or battleships versus aircraft carriers. You also have Clayton Christensen's idea of the innovator's dilemma: that the first generation of a technology, even when it is a game-changing technology, is invariably the worst of it and has a hard time displacing what is already there. That is the dilemma big organizations typically get caught in and why they get their lunch eaten by some upstart. The question is, is that going to be our militaries?

Is this generational change?

It may be generational but it also points to organizational structures.

In conversations you have had for your books, is there an emerging view as to how we should manage the ethics of robotics, of space or cyber?

I think at this stage we want to identify the technologies that are no longer science fiction – don’t deny them but don’t overplay them – so that we are scientifically informed. And second, we want to have a debate that is ethically informed – not the ethical version of the ostrich with its head in the sand as we saw with past generations of game-changing technology like atomic weapons. We can’t ignore the scientific, the technological possibilities or the legal and ethical aspects. But we have to remember that this debate is taking place within the realm of conflict where both sides get a vote.

I'm reminded that there are a lot of things that we say we will never do, yet we change our mind based on what other actors do or how we are doing in a conflict. I use the illustration of unrestricted submarine warfare. It was once a science fiction technology; then, prior to WWI, Arthur Conan Doyle writes a short story warning about the risk of an enemy conducting a submarine blockade of Great Britain. The Admiralty goes public to mock him for this outrageous, silly idea, saying roughly: no nation would choose to do this; in fact, if any captain did so, his own navy would put him up against a wall and shoot him. Just a few months later, war breaks out and Germany conducts unrestricted submarine warfare. And unrestricted submarine warfare is considered so against the norms of the day that it is the primary reason why the U.S. decides to enter WWI a couple of years later. Move forward to 1941 and the attack on Pearl Harbor: it took us five hours to change our mind on unrestricted submarine warfare. Five hours after the attack, the order goes out to the entire U.S. fleet: conduct unrestricted submarine warfare against the Japanese. And no one questions it.

About

Vanguard is Canada's oldest trade journal of record, providing a forum for Canada's security and defence community. It discusses strategic perspectives and overviews of government and military policy and practice through interviews with leading practitioners and contributions from renowned experts, including representatives from industry.