Imagine being a soldier, riding in a tank through a city in a war zone. Among the civilians, you must distinguish potential threats and hostile combatants. Despite training and military experience, it is almost impossible to make the right decision in such a chaotic and incredibly stressful situation. The complex human brain is amazing in many ways, but, like all systems, it is less than perfect. Right?

“In recent years, people have started to take advantage of new advances in neuroscience, using them in the development of new weapon systems”, explains Gregor Noll, professor of public international law at Lund University, whose research focuses on the laws of war, refugee law and human rights.

He has specialised in the role of neuroweapons in future warfare and the potential problems they entail – legal issues as well as dystopian visions of a future of apocalyptic dimensions. A key question he asks is whether the new weapon systems, if implemented and used in combat, are compatible with the laws of war and international humanitarian law.

To illustrate the problem, we return to the soldiers in the tank. Gregor Noll describes a system that today exists as a prototype:

“On the outside of the tank, several cameras have been installed that take pictures of the surroundings in all directions. The images are then processed in a computer program that filters out any non-essential components in the image. The images are then displayed at a very fast pace to a selected soldier. The images are displayed so fast that the soldier is unable to analyse them on a conscious level. Meanwhile, the soldier is wearing a ‘shower cap’ with electrodes that sense the electrical currents flowing in different parts of the brain. Recognition – when the brain reacts to something the soldier finds familiar – generates specific neurological patterns which are detected by the computer program. When such a pattern is registered, the image that caused it is once again displayed to the soldier who, on a subconscious level, recognised a potential threat.”
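In outline, the pipeline Noll describes is: present images in rapid succession, record the brain's response to each one, and re-display any image whose response shows a recognition pattern. The following toy model is purely illustrative – the function names, the threshold, and the simulated “recognition signal” are all invented for this sketch and have nothing to do with the actual prototype, which relies on real EEG measurements:

```python
import random

random.seed(42)

# Illustrative threshold (arbitrary units): responses above it count as
# "recognition". In a real system this would come from trained classifiers
# applied to measured EEG data, not a fixed number.
RECOGNITION_THRESHOLD = 5.0

def simulated_eeg_response(image_is_familiar):
    """Toy stand-in for one EEG epoch: familiar images evoke a stronger
    deflection on top of noisy baseline activity (loosely inspired by
    recognition-related responses such as the P300)."""
    baseline = random.gauss(0.0, 1.0)
    return baseline + (10.0 if image_is_familiar else 0.0)

def flag_candidates(image_stream):
    """Rapidly 'present' each image, record the simulated brain response,
    and queue for re-display any image whose response crosses the
    recognition threshold."""
    flagged = []
    for image_id, is_familiar in image_stream:
        if simulated_eeg_response(is_familiar) > RECOGNITION_THRESHOLD:
            flagged.append(image_id)
    return flagged

# A stream of (image id, does the viewer subconsciously recognise it?) pairs.
stream = [("img_01", False), ("img_02", True), ("img_03", False), ("img_04", True)]
print(flag_candidates(stream))
```

The point of the sketch is the division of labour: the computer never decides what a threat *is* – it only amplifies a human reaction the human himself is not conscious of, which is exactly where the accountability questions discussed below begin.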

“The role of neuroweapons in practical warfare would primarily be to streamline and improve the precision in target search – like a highly sophisticated telescopic sight where the human brain is connected to a computer”, says Gregor Noll.

“This could save resources and also make it easier for parties in combat to abide by the laws of war – which prohibit you from killing civilians and destroying hospitals and other civilian institutions”, he explains.

Isn’t that a good thing?


People under great stress – like the soldiers in the tank – may not be the world’s most reliable decision-makers. A system that does not struggle with human irrationality and does not suffer from cognitive overload in the chaos of a battlefield sounds like a good idea. But Gregor Noll argues that it is not as “simple” as it sounds.

“Unfortunately, the software in the system was not exactly sent down to the soldiers from a benevolent god”, says Gregor Noll. “The computer programs are created by humans. And the development of the software is based on their perceptions of how people work, how war works and how people work when in war”, he continues.

Gregor Noll, professor of public international law at Lund University. Photo: Catrin Jakobsson

“Everything depends on human decisions, taken outside the actual battlefield, and these in turn depend on the conditions you program into the system”, says Gregor Noll, and continues:

“What worries me is that it’s difficult to make anyone accountable. In traditional warfare there is a clear hierarchical order in which the general gives orders and the private obeys. But if the private is connected to an advanced neuroweapon system – who is to be held accountable? It becomes difficult to determine where the line is drawn between human and machine.”

Is it possible to stay within the limits of the laws of war? Under Article 36 of Additional Protocol I (1977) to the Geneva Conventions, a state in the process of acquiring new weapons must investigate whether these can be used within the boundaries of international humanitarian law. When it comes to neuroweapon systems like the one described by Gregor Noll, this is very difficult to determine. The conclusions drawn depend on whether or not you trust the system’s decision-making.

“It becomes problematic when we, due to our own cognitive shortcomings, attribute to a computer program a superiority that may not be justified”, says Gregor Noll, bringing up another example of how we already make important decisions today based simply on the fact that we trust the technology.

“I don’t know how it works when my computer tells me where to invest my money, but I trust that the people who wrote the program are wise…”.

Likewise, the weapon system in our fictional tank is designed to make the soldier feel safe in knowing that the computer is making the right decision in a military operation.

“When will we reach the point where a person’s ability to control the outcome of warfare waged with neuroweapon systems is, over time, so limited that it all comes down to trust?”, asks Gregor Noll, and continues:

“And when we reach that point, has the power of decision-making been delegated by the person to the complex weapon systems – a process not necessarily detected even by the programmers themselves?”

Today’s artificial intelligence has the capacity for deep learning – that is, the system learns from data and improves its own performance without the programmer specifying every rule.

“It’s a bit like releasing a child into the world – we have no idea what the child will do”, says Gregor Noll.

Gregor Noll points out that this example neatly illustrates the key issues in the debate on the “singularity” – the idea that we are heading towards a point where humanity loses control of a technology that continues to develop itself (robots taking over the world).

Is this pure science fiction or a possible future scenario? This and other issues will be discussed at the event “The Dark Side of Neurotechnology: What are neuroweapons and are they lawful?” taking place during the science week, “The Amazing Brain”. The event will be held in the Pufendorf Hall at the Faculty of Law on 5 September at 16:00–18:00.