Killer robots with A.I. are worrying scientists

Listen above to a discussion with Keelin Shanley on the dangers of killer robots with A.I., on Today with Sean O’Rourke (broadcast 5th August 2015)

Scientists are worried about how mankind will control robots with advanced built-in artificial intelligence (Credit: Warner Bros)

Huge advances in robotics and artificial intelligence mean that intelligent ‘killer robots’ could be ‘living’ among us in just a few years, and scientists and experts in the field are worried.

Origins

Artificial intelligence is the name given to scientists’ attempts to replicate human intelligence in a computer. At its most basic, it is software based on mathematics.

The scientific ‘father’ of A.I., as it is called, is Alan Turing, the brilliant English mathematician and code-breaker whose life was portrayed in The Imitation Game last year, which many listeners will have seen.

We can, in fact, lay claim to Turing for Ireland, as he was half Irish. His mother, Ethel Sara Stoney, was Irish, attended Alexandra College in Milltown, Dublin, and was part of a famous Anglo-Irish scientific family.

Ethel’s relations included George Stoney, the scientist who coined the term ‘electron’, and after whom a street in Dublin’s Dundrum is named; as well as Edith Stoney, regarded as the first woman medical physicist.

Turing’s idea was that a machine, using a mathematical alphabet consisting of just two numbers, 0 and 1, could solve any computable problem.

This machine was the Universal Turing Machine, and Turing came up with the idea as far back as 1936, when he was just 24 years old.
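Turing’s binary-alphabet idea can be illustrated with a toy simulator. This is a minimal sketch; the particular machine below (one that flips every bit on its tape) and its rule table are invented here for illustration, not taken from Turing’s paper.

```python
# A toy Turing machine over the binary alphabet {0, 1}.
# The transition table below (flip every bit, then halt at the blank)
# is an invented example for illustration.

def run_turing_machine(tape, rules, state="start", blank="_"):
    """Run a one-tape Turing machine and return the final tape contents."""
    tape = list(tape)
    head = 0
    while state != "halt":
        if head >= len(tape):
            tape.append(blank)  # the tape is unbounded to the right
        symbol = tape[head]
        new_state, write, move = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
        state = new_state
    return "".join(tape).rstrip(blank)

# Rules: in state "start", flip the bit and move right; halt on a blank.
flip_rules = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run_turing_machine("0110", flip_rules))  # prints "1001"
```

The point of Turing’s construction was that one such machine, fed the right rule table, can imitate any other — which is essentially what every computer does today.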

At some point in the not-too-distant future, the argument goes, machines will surpass humans in general intelligence. At that point, machines will replace humans as the dominant ‘life form’ on Earth. Life here will have entered its post-biological phase; we will be extinct.

Sufficiently intelligent machines could improve themselves, reaching an even higher level of intelligence, without the need for humans.

The fate of humans, whether they continued to exist or not, would depend on the whims of the machine super-intelligence.

Our relationship to the super intelligence would be like the relationship gorillas, for example, have with humans today. We’d be endangered, or doomed.

Thinkers like the Oxford philosopher Nick Bostrom, and futurist Ray Kurzweil, talk about a moment called a ‘technological singularity’, when A.I. becomes truly super-intelligent.

This is the moment when a computer or a robot with A.I. becomes capable of designing better, more intelligent versions of itself.

Rapid iterations of this process would result in an intelligence explosion, and very quickly a super-intelligence would emerge, far beyond human intelligence.

It would be like putting evolution into super-fast forward, and our own slow biological evolution would be unable to compete with this.

This super intelligence might be able to solve problems, and answer questions which have proved beyond the capabilities of human beings to solve.

Scientists disagree about when this moment might arrive: Kurzweil predicts it will be with us by 2045, while some have argued it could come as early as 2030.

Threat

No one agrees on how best to deal with unregulated ‘autonomous weapons’, or with the prospect of hostile super-intelligent machines.

Elon Musk, the SpaceX entrepreneur, has put $10 million of his own money into projects aimed at keeping A.I. ‘under control’ and ‘beneficial’.

We could try to build in safeguards that would prevent A.I. machines from turning on humans, like the protective Terminator in the Hollywood films.

We might do well to take on board ‘The Three Laws of Robotics’ devised by the brilliant science fiction author Isaac Asimov (author of I, Robot) back in 1942.

These are:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
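Asimov’s laws are strictly ordered: each law yields to the ones above it. As a playful illustration of that ordering (the fields and decision logic here are invented for this sketch, and real A.I. safety is nothing like this simple), they could be written as a priority check:

```python
# A playful sketch of Asimov's Three Laws as an ordered rule check.
# The Action fields and the yes/no logic are invented for illustration.

from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool           # would the action injure a human?
    inaction_harms_human: bool  # would *not* acting let a human come to harm?
    ordered_by_human: bool      # was the action ordered by a human?
    endangers_robot: bool       # does the action risk the robot itself?

def permitted(action: Action) -> bool:
    # First Law: never harm a human, nor allow harm through inaction.
    if action.harms_human:
        return False
    if action.inaction_harms_human:
        return True  # must act to prevent harm, overriding the laws below
    # Second Law: obey human orders (the First Law was checked above).
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, unless it conflicts with the laws above.
    return not action.endangers_robot

print(permitted(Action(False, False, True, True)))   # ordered by a human: True
print(permitted(Action(True, False, True, False)))   # would harm a human: False
```

Much of Asimov’s fiction turns on how easily rules this simple break down in practice — which is precisely the worry today’s researchers have about hard-coding safety into A.I.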

Future

Or perhaps our future is to become cyborgs, adopting and incorporating this immense artificial intelligence as part of our own existence.

We could decide to ditch our biology, and to become a race of super intelligent, immortal machines.

Our ‘primitive’, fragile, biological beginnings may, in time, be forgotten.