Killer robots like this one from the movie Terminator 3 have fuelled fears and raised questions about the wisdom of pursuing artificial intelligence. Reuters

Stanford has initiated a private, philanthropic 100-year study into the effects of artificial intelligence on everyday life.

The announcement is timely, coming after Stephen Hawking recently stated that full artificial intelligence could be the last achievement of mankind.

Thinkers from top institutions are being invited to take part in the effort called AI100.

According to Stanford alumnus and scientist Eric Horvitz, a former president of the Association for the Advancement of Artificial Intelligence, the initiative aims to examine critical issues in the design and use of AI systems, including their economic and social impact.

AI100 is expected to help identify challenges and concerns through a series of periodic studies on how AI will affect automation, national security, psychology, ethics, law, privacy, democracy and other issues.

"I'm very optimistic about the future and see great value ahead for humanity with advances in systems that can perceive, learn and reason," said Horvitz, who is managing director at Microsoft Research.

Harvard, UCB, Carnegie Mellon and the University of British Columbia are partnering with Stanford in the initiative.

Hawking's statement

Stephen Hawking's warning that full artificial intelligence "would take off on its own, and re-design itself at an ever increasing rate" stoked fears fuelled by images of machines running amok in popular Hollywood sci-fi thrillers like 2001: A Space Odyssey, Robot and Terminator.

The recent blockbuster Interstellar also featured an AI character, albeit a gentle and humorous one.

Opinions have varied, with most experts choosing to project confidence rather than fear.

Nick Bostrom, director of a programme on the impacts of future technology at the University of Oxford, told AFP the threat of AI superiority was not immediate.

"I think machine intelligence will eventually surpass biological intelligence—and, yes, there will be significant existential risks associated with that transition," he said.

Tony Cohn, a professor of automated reasoning at Leeds University in northern England, said full AI is "still a long way off... not in my lifetime certainly."