But if robots are thrust into action by military powers, the futurologist warns they will be capable of conjuring up their own "moral viewpoint".

And if they do, the ex-rocket scientist claims they may turn against the very people sending them out to battle.

Dr Pearson, who blogs for Futurizon, told Daily Star Online: "As AI continues to develop and as we head down the road towards consciousness – and it isn't going to be an overnight thing, but we're gradually making computers more and more sophisticated – at some point you're giving them access to moral education so they can learn morals themselves.

"You can give them reasoning capabilities and they might come up with a different moral code, which puts them on a higher pedestal than the humans they are supposed to be serving.

"They might decide themselves that, although they have been told to respect this particular moral viewpoint, actually theirs is more important and they might go off on their own direction which we might not approve of."


Asked if this could prove fatal, he responded: "Yes, of course.

"If they are in control of weapons and they decide that they are a superior moral being to the humans they are supposed to be guarding, they might make decisions that certain people ought to be killed in order to protect the larger population.

"Who knows what decisions they might take?

"If you have a guy on a battlefield telling soldiers to shoot this bunch of people, for whatever reason, but the computer thinks otherwise – the computer is not convinced by it – it might conclude that the soldier giving the orders is the worst offender rather than the people he's trying to kill, so it might turn around and kill him instead.

"But as systems get more complex there is always a risk that they will malfunction in unpredictable ways, possibly putting your own forces at risk."

"Our main concern is that autonomous weapons, being allowed to operate independently over wider areas and longer periods of time, cause death or destruction that a human commander is not able to foresee or predict.

"If we don’t know where weapons will be fired, or exactly what they will be fired against, a human can’t really make legal or moral judgements about the effects created through the use of such systems."

Moyes added: "We believe there needs to be new international law to ensure humans remain in control of weapons systems.

"This is about protecting civilians and human dignity – but it is also a practical issue for militaries.

"Soldiers don’t want to be sent into battle alongside systems that are unpredictable and might go off the rails."