Autonomous War – Which dangers are associated with warfare without human intervention?

The term autonomous war has been controversial for years. But what exactly does it mean? Autonomous war refers to the use of lethal autonomous weapons (LAWs for short) and machines or vehicles used primarily by the military for modern warfare. Autonomous weapon systems can decide independently about life and death on the battlefield. In the media, however, they are more commonly known as “killer robots”.

But when is a machine considered autonomous? In the context of the military, there are different levels of autonomy:

Human operated

Remote controlled

Semi autonomous / Human supervised

Fully autonomous

For a better understanding, these levels are explained in more detail below.

Human operated
The machine is completely controlled by a person on site. For example, every tank in World War I was human operated.

Remote controlled
The machine is completely remote-controlled by a person. An example of this are most commercially available drones.

Semi autonomous / Human supervised
The system can handle a wide range of tasks independently, such as patrolling a predefined area or recognizing targets on its own. However, such a system may not execute a fire order by itself; every fire order must be approved by a human being.

Fully autonomous
All actions intended for the system can be carried out independently (including the fire command) – the system learns on its own. However, a person can usually give new instructions or intervene at any time.
With a few exceptions, such systems do not officially exist or are not yet officially in use.

In the following, some systems already in use are briefly introduced, ordered by their level of autonomy.

Uran-9 (Russia)
A primarily remote-controlled tank developed by Russia, which can also be used semi-autonomously. The tank drives a predefined route and recognizes and identifies targets automatically, but a fire command must be given manually. In semi-autonomous mode, it only follows the route given by its operator and cannot plan its own routes within an area.

Figure 1: Uran-9 showcase

It can be controlled over distances of up to 3 kilometres and carries a machine gun, anti-tank guided missiles and an anti-aircraft missile system. It is also equipped with a thermal imaging device, for example to identify snipers. According to media reports, this tank is currently deployed in Syria.

Low-cost UAV Swarming Technology (US Navy)
LOCUST is a program of the US Navy that allows semi-autonomous drones to be launched and commanded as a swarm. Approximately 30 drones are launched at once from a kind of tube-based ground station. They then link up with each other, exchange information and fly in formation independently.

Figure 2: LOCUST in use

They will be used primarily for reconnaissance purposes, but are expected to be equipped with weapon systems in the near future.
Humans can intervene at any time to set objectives or select the areas to be reconnoitred.

nEUROn (France)
nEUROn is a demonstration program of the French manufacturer Dassault Aviation intended to demonstrate possible technologies for UCAVs (unmanned combat aerial vehicles); it will not go into serial production for military purposes. Several other nations are involved in the programme, such as Italy and Sweden.
The vehicle is an autonomous stealth fighter jet that carries two laser-guided bombs and can reach speeds of up to 990 km/h.

Figure 3: nEUROn jets in use

Many tests have been conducted since 2012 to evaluate the aircraft's stealth properties against radar and infrared detection. The fighter jet should be able both to scout enemy positions and to attack them, acting according to the principle of “see and not be seen”. Previous tests suggest these goals were achieved: a bomb was successfully dropped in a test area in early 2015.

“RoBattle” ground vehicle & “BirdEye” drone (Israel)
The two vehicles operate largely fully autonomously and always work together. They are primarily used for border surveillance and control. If the BirdEye drone detects something suspicious, for example, it notifies the RoBattle, which then makes its way there. Images from both the drone and the ground vehicle are constantly transmitted to the control centre in real time. If RoBattle detects an enemy, it can either engage them automatically with its machine gun or inform the command and control centre, which makes the final decision.

Figure 4: RoBattle and BirdEye communicate with each other and C&C center

The two robots can supervise an area up to 150 km from the control centre. However, according to media reports, the military has agreed that the use of weapons should not be left to the machine for now, meaning that current and future missions tend to run in semi-autonomous mode.
The robots are currently being tested at the Argentinian-Bolivian border in Argentina.

Samsung SGR-A1 (South Korea)
This is a stationary sentry robot with automatic surveillance functions. It has heat and motion detectors as well as separate cameras for monitoring, tracking and zooming, and has a detection range of 2 miles.
It automatically detects targets, can warn them with an optional audible signal and can then open fire if the device is in fully autonomous mode. In semi-autonomous mode, a person in the command centre is notified when a target is recognized. This person can then give the target an acoustic warning or open fire.

Figure 5: Components of a Samsung SGR-A1

The examples above represent only a small selection of the semi-autonomous and fully autonomous military systems already available. It can be assumed that a number of fully autonomous devices and weapons are already in development or in a covert test phase and are not yet known to the public.

But what exactly are the biggest threats posed by LAWs?

First and foremost, the general decision-making power over life and death on the battlefield is handed to a machine: a machine with artificial intelligence that learns independently and usually continuously. Being mere machines, they have no empathy and no innate distinction between good and evil or right and wrong. Instead, these values are learned and trained via neural networks. This raises the question of at what point such a machine would be capable of always making the right decision. In my opinion, such a machine should not be used without error-free friend-or-foe classification. Moreover, leaving such decisions to a machine should not be justifiable on ethical grounds.

In addition, the use of autonomous weapon systems can lower the inhibition threshold for warfare in general and thus spark many unnecessary conflicts or even wars, since those waging war no longer put allied human lives at risk. Further conflicts and wars can also result from wrong decisions made by such autonomous weapon systems.

A further question that arises with regard to fully autonomous weapon systems is whether and how they can be protected against hacking attacks, since it is well known that every system can have security gaps. The same problem arises, for example, with autonomous driving. With LAWs, however, the consequences are even more drastic, because a single successful attack can cost many human lives.

It should also be considered that once developed, fully autonomous weapons can also fall into the wrong hands, such as terrorists.

Prominent voices on the topic

There are also many prominent voices that have already spoken on the subject of artificial intelligence in the military or on lethal autonomous weapons. Some of them are quoted below.

At first glance, the selected quotations refer to the topic of artificial intelligence in general, but they can easily be applied to the use of artificial intelligence in warfare and thus to the use of LAWs. Elon Musk in particular appears to be a strong opponent of fully autonomous weapons. This may be because Musk, through his connection to Tesla and its autonomous vehicles, has solid knowledge of the possible uses, limits and dangers of artificial intelligence.

In 2017, an open letter was sent to the United Nations calling for a ban on all fully autonomous lethal weapons, signed by 116 robotics and AI experts from 26 different countries. Among them were Elon Musk and the co-founder of DeepMind, Mustafa Suleyman. One quotation from the letter that I find very fitting: “Once this Pandora’s box is opened, it will be hard to close.” It makes clear that once autonomous weapons are unleashed, the movement will be difficult or impossible to stop. A copy of this letter can be found here: http://bit.ly/2p7S5E1.

A similar letter, warning against the general dangers of AI, had already been written and signed by over 1,000 scientists in 2015.

Current news and movements

There are some recent reports and movements warning against the use of lethal autonomous weapons or trying to ban them.

In June 2018, for example, Google announced that it would not develop or provide artificial intelligence for military weapons.

The German Bundeswehr also announced in February 2018 that it apparently will not buy or use fully autonomous weapons.

Although, as far as is currently known, no or only a few fully autonomous weapon systems are in use, experts are convinced that various states such as the USA, Great Britain, China, Israel, Russia and South Korea are already developing fully autonomous weapons or testing them covertly.

There are various organisations working towards a ban on fully autonomous weapons, for example “Stop Killer Robots” and the German “Killer Roboter stoppen”. On 23rd August 2018, for instance, “Killer Roboter stoppen” launched a new petition demanding a ban under international law. As of 12th September 2018, however, this petition had only 118 of the targeted 5,000 signatures.

Back in November 2017, the fictional YouTube video “Slaughterbots” was published, which is also intended to warn against the use of fully autonomous weapons. The clip depicts fully autonomous drones equipped with explosive charges and operating in a swarm, which hackers use to carry out terrorist attacks on students of a certain political orientation, selected by means of big data collected from social networks.

Conclusion

The use of fully autonomous weapons is a highly charged topic that must be taken seriously. In general, the first nation able to rely completely on fully autonomous weapon systems will grow into a world power and, in my opinion, will gain the upper hand in warfare.

Given the currently tense situation between some nations, the eventual use of fully autonomous weapon systems could, in my opinion, be the final straw and trigger a new world war.

In addition, the fact that the technologies for developing fully autonomous weapon systems are already available, and that such weapon systems are quite certainly already being developed and tested, raises the question for me of which nations would actually comply with a ban. What would happen if some nations complied with a ban but others did not? For me, this poses the big question of how this movement can be stopped at all, since it is already underway.

Not to forget, of course, that autonomous weapon systems, or the technologies used to develop them, may fall into the wrong hands, such as those of criminal organizations, hackers or terrorists. The resulting impact could be disastrous.

All in all, it is clear to me that the ultimate power of decision over life and death should in any case lie exclusively with human beings and should never be delegated to a machine. However, since technology keeps advancing and given the current state of development, I fear it is already too late to stop this movement and that fully autonomous weapon systems will soon be used in modern warfare.

Research & Security questions

In the case of a ban: who would monitor and enforce it?

How can the development and deployment be stopped in general?

Who bears responsibility if, for example, a civilian is mistaken for a military target?

How can they be protected against hackers? How secure are they?

What happens if the robot runs amok or is programmed to do so?

How can we ensure that these technologies do not fall into the hands of, for example, terrorists?