Why Artificial Intelligence Weapons Are So Scary

In a recent article in The Atlantic, LSE philosopher of science Bryan W. Roberts argues that humans are what make AI weapons so scary.

Artificial intelligence weapons are closer than many realise. Google's self-driving cars and quad-copter drones are only small modifications away from being turned into AI weapons.

In that article, Google software engineer Zach Musgrave and Roberts argue that the problem with AI weapons is not that they will lead to robot takeovers, but that they can too easily be corrupted and hacked by humans:

“This is the immediate danger with AI weapons. They are easily converted into indiscriminate death machines, far more dangerous than the same weapons with a human at the helm.”