Will Future Wars be Fought with Robots?

Replicating human behaviour in robots has long been a central objective of scientists working in the field of information and communication technologies (ICT). However, a major obstacle to accomplishing this has been controlling the interaction between movement and vision. Indeed, accurate spatial perception and smooth visual-motor coordination have proved elusive.

Tackling this issue was the main aim of the EU-funded EYESHOTS project ('Heterogeneous 3-D perception across visual fragments'). By simulating human learning mechanisms, the project successfully built a prototype robot capable of achieving awareness of its surroundings and using its memory to reach smoothly for objects.

The implications of this breakthrough are not limited to potential improvements in robotic mechanics - they will also help to achieve better diagnoses and rehabilitation techniques for degenerative disorders such as Parkinson's disease.

The project began by examining human and animal biology. A multi-disciplinary team involving experts in robotics, neuroscience, engineering and psychology built computer models based on neural coordination in monkeys, which closely resembles human coordination.

The key was recognising that our eyes move so quickly that the images produced are in fact blurred - it is up to the brain to piece together these blurred fragments and present a more coherent image of our surroundings.

Using this neural information, the project built a unique computer model that combined visual images with movements of both eyes and arms, similar to what occurs in the cerebral cortex of the human brain.
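The fusion of blurred visual fragments into one coherent estimate can be illustrated with a toy sketch. This is not the EYESHOTS model itself, just a generic technique (inverse-variance weighting) with hypothetical numbers:

```python
# Illustrative sketch only: fuse noisy 2D position estimates from several
# visual "fragments" by inverse-variance weighting, so that sharper glimpses
# count for more. All names and numbers are hypothetical, not from EYESHOTS.

def fuse_fragments(estimates):
    """estimates: list of ((x, y), variance) pairs from individual glimpses.
    Returns the inverse-variance-weighted mean position."""
    total_w = sum(1.0 / var for _, var in estimates)
    x = sum(p[0] / var for p, var in estimates) / total_w
    y = sum(p[1] / var for p, var in estimates) / total_w
    return (x, y)

# Three blurred glimpses of the same object, each with its own uncertainty:
glimpses = [((1.0, 2.2), 0.5), ((1.2, 2.0), 0.25), ((0.9, 2.1), 1.0)]
print(fuse_fragments(glimpses))
```

The fused position sits closest to the most reliable (lowest-variance) glimpse, which is the essential idea behind piecing coherent perception together from unreliable fragments.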

In effect, the project was built on the premise that being fully aware of the visual space around you can only be achieved through actively exploring it. This, after all, is how humans learn to understand the physical world - by looking around, reaching out and grabbing things.

In everyday life, the experience of the 3D space around us is mediated through movements of the eyes, head and arms, which allow us to observe, reach, and grasp objects in the environment. From this perspective, the motor system of a humanoid robot should be an integral part of its perceptual machinery.

The end result of this approach is a humanoid robot that can move its eyes and focus on one point, and even learn from experience and use its memory to reach for objects without having to see them first. The robotic system comprises a torso with articulated arms and a robot head with moving eyes.
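Reaching for an object without re-fixating it boils down to keeping a spatial memory of where things were last seen. A minimal sketch of that idea (the class and coordinates are hypothetical, not the EYESHOTS system):

```python
# Illustrative sketch only: a minimal "spatial memory" that lets a robot
# reach for an object it has seen before without looking at it again.
# The class name, frame, and coordinates are hypothetical.

class SpatialMemory:
    def __init__(self):
        self._positions = {}  # object label -> (x, y, z) in the robot's body frame

    def observe(self, label, position):
        """Store where an object was last seen during visual exploration."""
        self._positions[label] = position

    def reach_target(self, label):
        """Return a stored reach target, or None if the object is unknown."""
        return self._positions.get(label)

memory = SpatialMemory()
memory.observe("cup", (0.4, -0.1, 0.2))  # seen while looking around
print(memory.reach_target("cup"))         # later: reach without re-fixating
```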

Through the application of neuroscience, the EYESHOTS project, completed in 2011, successfully identified a means of giving robots a sense of sight similar to human vision. This represents an important milestone in creating a humanoid robot that can interact with its environment and perform tasks without supervision.

Low-cost, high-performance sensors, such as digital cameras, and advances in computer vision and artificial intelligence are bringing the age of robots ever closer, says QUT robotics scientist Dr Feras Dayoub.

Dr Dayoub said robots such as the Merry Porter mobile robot had already been designed to perform routine delivery tasks in hospitals while others, such as the cuddly seal-like Paro, were being used therapeutically.

"But potentially robots will play a much wider role in our society," he said.

"Functional and useful mobile service robots will have to be able to share physical spaces with humans and deal with a dynamic and ever-changing world.

"They will need to have a map and adapt to changes in the arrangement of objects and the appearance of the environment, changes that may be spontaneous, discontinuous and unpredictable, as a result of human activities."

Dr Dayoub is using an Adept Guiabot mobile robot to develop methods which enable a mobile robot to navigate through a changing environment, such as a family home.

"In a home, furniture will be moved, visitors will come and go, not much will stay exactly the same from day to day and robots will need to learn how to adapt to this dynamic setting."

The Adept Guiabot will be demonstrated at Robotronica, a free robot spectacular being held at QUT's Gardens Point Campus on Sunday, August 18. Robotronica will see world-leading robots from Europe and Japan come to QUT for a free public event designed to encourage people to learn about and interact with robots from the controversial life-like variety to the downright cute.

Dr Dayoub said a robot would be a lifetime investment for a family and for many that robot would be seen as part of their family.

"The main aim of any technology is to make life easier," he said.

"One day robots will help the elderly and disabled live a normal life in their own homes without having to have a full-time human helper. In that way a robot will give them freedom.

"It is highly likely that robots will one day be able to perform duties such as those of a tour guide, receptionist, and even a night watchman or a fire fighter."

Dr Dayoub and PhD student Timothy Morris are undertaking an Australian Research Council-funded project ($500,000) titled "Lifelong robotic navigation using visual perception" under the leadership of QUT Professors Peter Corke and Gordon Wyeth, in collaboration with researchers at Oxford University in the UK.

"As humans we can see our environment and navigate our way through it, but robots need to be fitted with cameras and multiple sensors to enable them to sense their environment," Dr Dayoub.

"While it's possible to program a robot to do certain tasks and navigate through a repetitive, structured environment, such as corridors, it is a completely different thing to create a robot that can sense and adapt to a changing environment, such as a home?

"However as many researchers around the world are working on the same topic, our collaboration is global.

"Every time a scientist publishes a paper on their research, it adds to the collective pool of knowledge.

"While we are working on the navigational aspects of robotic research, researchers elsewhere are concentrating on how to make robotic arms work while yet others are working on communication/interactions between robots and humans.

"Eventually, we will have perfected all the parts that will make up an effective robot."

But he said there was a range of social issues to consider alongside the technical ones.

"How will robots be monitored? How will they deal with an emergency? How will people react to robots? With their cameras and sensors they will have the capacity to record everything.

"There are multiple levels of complication surrounding robotic research and the technology to advance the research is improving all the time. We will get there eventually."


*******

Robot wars: after drones, a line we must not cross

We are on the dangerous threshold of vesting in machines the power to make autonomous life-or-death decisions over humans

Drones are becoming dated technology: we may now be able to hand over some of the life-and-death decisions of war to robots.

From the perspective of those engaged in modern warfare, lethal autonomous robots (LARs) offer distinct advantages. They have the potential to process information and to act much faster than humans in situations where nanoseconds could make the difference. They also do not act out of fear, revenge or innate cruelty, as humans sometimes do.

A drone still involves a human "in the loop" – someone, somewhere presses the button. This is slowed down by satellite communications (think of the time-lag when foreign correspondents speak on TV) and these communications can be interrupted by the enemy. So why not take the human "out of the loop", and install an on-board computer that, independently, is able to identify and to trigger deadly force against targets without human intervention?
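The scale of that communications delay is easy to estimate with a back-of-the-envelope calculation (the figures below are standard physical constants and an assumed geostationary relay, not numbers from the article):

```python
# Back-of-the-envelope estimate of signal delay through a geostationary
# satellite relay. Assumes video comes down via the satellite and the
# operator's command goes back up the same way: four ground<->satellite legs.

ALTITUDE_KM = 35_786      # geostationary orbit altitude above the equator
C_KM_PER_S = 299_792.458  # speed of light

hop = ALTITUDE_KM / C_KM_PER_S  # one ground<->satellite leg, ~0.12 s
round_trip = 4 * hop            # video down (2 legs) + command up (2 legs)
print(f"one hop: {hop:.3f} s, round trip: {round_trip:.3f} s")
```

Roughly half a second of unavoidable light-travel delay, before any encoding or processing, is the time-lag the argument above refers to, and the physical floor an on-board computer would eliminate.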

There are good reasons to be cautious about permitting this.

On a practical level, it is hardly clear that robotic systems can meet the minimum requirements set by the law of war for lethal decision-making. Popular culture, including sci-fi, celebrates the capabilities of robots, but robots are good at what they do only within a narrow range: their sensors give them tunnel-vision information and they are largely wired for quantitative work.

Soldiers in battle may lawfully target only combatants, and not civilians. Will a computer be able to make the value judgment that a group of people in plain clothing carrying rifles are not enemy combatants but hunters – or soldiers surrendering?

Civilian loss of life as "collateral damage" can be lawful only if it is proportionate to the military objective. This is essentially a qualitative judgement, requiring in many cases experience and common sense and an understanding of the larger picture that robots do not have.

It is also not clear who is to be held responsible if things go wrong. Yet it makes little sense to punish a robot.

The increased availability of weapons that place a state's soldiers out of harm's way may make it easier for those states to go to war, and lead to ongoing and global (if low-intensity) warfare – as well as targeted killings. This may have far-reaching implications for the international security system that has saved the last three generations from the scourge of global war.

The overriding question of principle, however, is whether machines should be permitted to decide whether human beings live or die.

Human beings are frail, flawed and, indeed, can be "inhumane"; but they also have the potential to rise above the minimum legal standards for killing. By definition, robots can never act in a humane way. If human beings are taken out of the loop, so are not only the shortcomings of humans, but also our redeeming features.

Robots may, in some respects, not be predictable enough to be used in war: even technicians will not know exactly what to expect from machines that make their own choices, and the average commander in the field who deploys them will be even more at a loss. In other respects, LARs may be too predictable: treating everyone according to the same algorithms means brushing aside the uniqueness of each individual.

But the situation is complex. While LARs pose a clear threat in some cases, there is also the argument that under certain circumstances, using robots may, in fact, save lives. For example, human soldiers who detect movement may fire, afraid it is a sign of enemy soldiers, when, in reality, their "target" may be civilians in hiding. A robotic soldier, which does not fear for its life, may be deployed to go closer and to investigate. Likewise, robots in some cases could more precisely target their fire.

The problem is that even if this is correct, it is not clear that the current laws of war, and the levels of capacity of the soldiers in the field, are sufficient to confine the use of LARs to those situations where they can possibly save lives. But more importantly, does it not demean the value of the lives of each one of us to know that it has become part of the human condition that we could potentially become collateral damage in the calculations of a machine?

This calls for a cool assessment. On the one hand, there is the danger that we overestimate the abilities of computers – because they beat us at chess and maths, we may defer to them regarding decisions that they are not equipped to take. On the other hand, we should not be closed to investigating situations where they can possibly serve to preserve life.

To some extent, we have already given some control to machines over individual targeting decisions with various long-distance weapons. But there is an important, if imperceptible line that we should not cross: humanity should not surrender meaningful control over questions of life and death to machines.

UK foreign minister Alistair Burt gave the assurance during a debate on the issue in the House of Commons on 17 June that the UK was not developing such weapons, and had no current plans to do so. The United States took a further step in the right direction when the Department of Defense in November 2012 formalised their position and issued a directive that commanders and operators shall retain "appropriate levels of human judgment over the use of force". These initiatives should be consolidated and other states should be encouraged to follow the same route.

War without ongoing reflection on the human cost is mechanical slaughter. The current prospect of entering a world where machines are explicitly mandated to kill humans should give pause to all of us. While technology rushes forward, we need to take some time out to ensure that not only lives, but also a concept of the value of human life, are preserved in the long term.