An autonomous robot successfully performed soft-tissue surgery for the first time this year. While the idea of having a robot doctor might scare you, scientists say it could be safer than having a human doctor.

Photos: AI: The dangers and possibilities of autonomous machines

In July this year, a "robocop" called Knightscope K5 knocked down a 16-month-old baby while patrolling a shopping mall in Palo Alto. "The robocop wasn't equipped to detect the child's cries -- a cue that would indicate an ethical violation," AI expert Colin Allen tells CNN.


The first fatal crash involving a self-driving vehicle occurred on May 7 this year. With an estimated 10 million self-driving cars set to roam North American streets in under four years, how cars make decisions in life-and-death situations is becoming an increasingly important question.


Rolls-Royce unveiled its first driverless vehicle in June this year, a concept car with no steering wheel but a virtual assistant named Eleanor. Parent company BMW's first autonomous model, the iNext, will hit the roads in 2021. It will be built to always protect human lives regardless of material damage, a spokesperson tells CNN, claiming computers make such decisions more efficiently than humans.


"Humanity's position on this planet depends on its intelligence, so if our intelligence is exceeded, it's unlikely we will remain in charge of the planet," Tesla CEO Elon Musk has previously told CNN.


Many AI systems, such as the program AlphaGo, which outsmarted the human Go world champion this year, learn by reinforcement learning. Once programmed with basic proficiency, the program plays millions of games against itself, refining its original techniques by learning from experience.
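The self-play idea described above can be sketched in a few lines. This is a toy illustration only, not AlphaGo's actual algorithm (which combines deep neural networks with Monte Carlo tree search): a simple tabular learner plays the game of Nim against itself (take 1-3 stones per turn; whoever takes the last stone wins) and improves from the outcomes of its own games. All names and constants here are illustrative assumptions.

```python
import random
from collections import defaultdict

ACTIONS = (1, 2, 3)
Q = defaultdict(float)        # Q[(stones_left, take)] -> learned value of that move
EPSILON, ALPHA = 0.1, 0.5     # exploration rate, learning rate

def choose(stones, greedy=False):
    legal = [a for a in ACTIONS if a <= stones]
    if not greedy and random.random() < EPSILON:
        return random.choice(legal)                    # explore: try a random move
    return max(legal, key=lambda a: Q[(stones, a)])    # exploit: best known move

def self_play_episode(start=10):
    """Play one full game against itself, then learn from the result."""
    stones, history = start, []
    while stones > 0:
        action = choose(stones)
        history.append((stones, action))
        stones -= action
    # The player who made the last move took the final stone and wins.
    # Credit moves backwards, alternating +1 (winner) and -1 (loser),
    # since the two players alternate turns in the move history.
    reward = 1.0
    for state, action in reversed(history):
        key = (state, action)
        Q[key] += ALPHA * (reward - Q[key])
        reward = -reward

random.seed(0)
for _ in range(20000):
    self_play_episode()
```

After training, the learner reliably takes the immediately winning move when one exists -- knowledge no one programmed in; it emerged entirely from games the system played against itself.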


"We respond to autonomous machines in a very 'human way,' which makes it easy for programmers to manipulate us," says Blay Whitby, whose doctorate tackled the social implications of AI.

Story highlights

Some experts predict AI becoming a weapon of mass destruction -- and possibly even the cause of WWIII

Others say robots can be more ethical than humans

(CNN)Machines are rapidly learning to think on their own, but will the robot revolution lead to a modern utopia -- or an apocalypse?

Government officials say autonomous vehicles will make transportation safer, more accessible, more efficient and cleaner. Last week, the Department of Transportation released guidelines for the testing and deployment of automated vehicles, which detail how the vehicles should perform and include a model for state policies.

Self-driving vehicles are just the tip of the autonomous revolution.

In 2016, autonomous robots perform surgery; algorithms invest your money; robocops patrol shopping malls; and if you end up in the hospital, a computer system can determine how quickly you get treated.

Many decisions made by autonomous machines have moral implications -- yet little has been settled about which ethics machines should follow, or who decides what those ethical assumptions should be.


Machine ethical dilemmas

In Florida in May, Joshua Brown died when an autopilot system did not recognize a tractor-trailer turning in front of his Tesla Model S and his car plowed into it -- the first fatality involving an autonomous vehicle.


With an estimated 10 million self-driving cars set to roam North American streets by 2020, how autonomous cars make decisions in life-and-death situations is becoming an important question.

In short, how should the vehicle decide which lives to sacrifice?

Imagine the following scenario. You're in a self-driving car on autopilot. If the car turns right it kills a young child. If it turns left, it will hit and kill a few men. If it does nothing, your own life is sacrificed. Would you want the car to make the judgment for you?

Chris Urmson, head of Google's self-driving car project, pointed out that even humans don't deliberately apply ethical theories in critical situations. "In real time, humans don't do that," he said.

Human agency at risk


While some experts find the lack of governance alarming, others fear autonomous machines eventually will violate human agency; that machines will take away humans' freedom to make their own moral decisions.

"Many issues should be ruled upon by Congress and state legislatures and courts -- such as speed levels, when to yield and response to a fire truck," Amitai Etzioni, former White House adviser and academic, tells CNN.

Etzioni's main point, however, is that remaining ethical issues should be controlled by the owner of the machine. But, how do we create robots that follow the moral directions of their owner? According to Etzioni, the answer lies in "ethics bots."
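Etzioni's proposal can be made concrete with a toy sketch. In this hypothetical "ethics bot" -- all function names and numbers below are illustrative assumptions, not Etzioni's implementation -- the bot observes the owner's past driving to infer how far above or below posted limits they habitually go, then reproduces that habit within a hard legal margin that, in Etzioni's framing, legislatures rather than owners would set.

```python
from statistics import mean

def learn_preference(past_speeds, posted_limits):
    """Infer the average margin the owner keeps relative to posted limits."""
    return mean(s - l for s, l in zip(past_speeds, posted_limits))

def choose_speed(posted_limit, preference, hard_cap=5):
    """Drive the way the owner would, but never exceed the legal cap."""
    return posted_limit + min(preference, hard_cap)

# Owner historically drove 52 in a 50, 63 in a 60, 31 in a 30:
pref = learn_preference([52, 63, 31], [50, 60, 30])   # averages +2
speed = choose_speed(40, pref)                         # drives 42 in a 40 zone
```

The division of labor mirrors Etzioni's point: the `hard_cap` encodes what society rules on collectively, while the learned `preference` encodes the remaining moral latitude left to the individual owner.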

AI as a weapon of mass destruction


Experts like Musk, physicist Stephen Hawking, and Microsoft co-founder Bill Gates have warned that AI could be more dangerous than nuclear weapons.

In July this year, Musk tweeted a link to the "Skynet" Wikipedia page -- the all-knowing computer network from robot dystopia "Terminator" -- suggesting AI might bring a robot apocalypse. The tweet was a response to a $2 million Defense Advanced Research Projects Agency (DARPA) challenge, encouraging hackers to build an autonomous hacker to be used in warfare.

"Lethal autonomous weapons systems can locate, select, and attack human targets without human intervention," Stuart Russell, who serves on the Scientific Advisory Board for the Future of Life Institute together with the likes of Hawking and Musk, tells CNN.

Robots: Heroes or terminators?


Whether autonomous machines' impact is positive or negative depends both on who regulates the technology and what it is used for.

Some experts say robots might become more ethical than humans.

In the 1990s, Americans like Bruce McLaren and the husband-and-wife ethicist team Susan and Michael Anderson developed ethical reasoning programs able to "morally outperform" the average person.

"After all, the bar isn't very high. Most human beings are hardly ideal models for how to behave ethically," Anderson, who is working with her husband to incorporate ethical reasoning systems in autonomous machines, tells CNN.

McLaren, on the other hand, is worried about weapon-enabled drones being given autonomy, and about machines replacing (rather than advising) human decision makers on ethical questions.

"Autonomous cars will lower fatalities, while lethal autonomous weapons will lower barriers to warfare and might unintentionally start World War III," AI ethicist Wendell Wallach says.

Allen says that just like all technology "one can expect positives and negatives."

"The chief worry is that the people will adjust their behavior to machines, rather than the other way around."