Peter Asaro

Dr. Asaro is Associate Professor in the School of Media Studies at the New School in New York City. He is the co-founder of the International Committee for Robot Arms Control, and has written on lethal robotics from the perspective of just war theory and human rights. Dr. Asaro's research also examines agency and autonomy, liability and punishment, and privacy and surveillance as they apply to consumer robots, industrial automation, smart buildings, aerial drones and autonomous vehicles.

Should Google, a global company with intimate access to the lives of billions, use its technology to bolster one country’s military dominance? Should it use its state-of-the-art artificial intelligence technologies, its best engineers, its cloud computing services, and the vast personal data that it collects to contribute to programs that advance the development of autonomous weapons?

The Convention on Certain Conventional Weapons (CCW) at the UN has just concluded a second round of meetings on lethal autonomous weapons systems in Geneva, under the auspices of what is known as a Group of Governmental Experts. Both the urgency and significance of the discussions in that forum have been heightened by the rising concerns over artificial intelligence (AI) arms races and the increasing use of digital technologies to subvert democratic processes.

I have been asked by Science & Film to review the realism of EYE IN THE SKY in terms of the new technologies we see deployed in the film. Most of the technologies employed in the film narrative have some basis in reality, though many are still in very early or proof-of-concept stages, and remain far from the reliable and useful technologies depicted in the film.

Last week the Future of Life Institute released a letter signed by some 1,500 artificial intelligence (AI), robotics and technology researchers. Among them were celebrities of science and the technology industry—Stephen Hawking, Elon Musk and Steve Wozniak—along with public intellectuals such as Noam Chomsky and Daniel Dennett. The letter called for an international ban on offensive autonomous weapons, which could target and fire weapons without meaningful human control.

This article considers the recent literature concerned with establishing an international prohibition on autonomous weapon systems. It seeks to address concerns expressed by some scholars that such a ban might be problematic for various reasons. It argues in favour of a theoretical foundation for such a ban based on human rights and humanitarian principles that are not only moral, but also legal ones. In particular, an implicit requirement for human judgement can be found in international humanitarian law governing armed conflict.

“It is clear that the Pentagon aims to build out Project Maven to armed drones, and its functionality does not need much adjustment to become a target recognition system, carried out by an armed drone, that could function without meaningful human control,” said Peter Asaro, an Associate Professor at the School of Media Studies at The New School and a co-author of the letter.

In this week’s episode of Yahoo News’ Unfiltered, we talk to New School professor Peter Asaro about the dangers of artificial intelligence technology and autonomous weapons. In an age when AI plays a dominant role in pop culture, some may view Slaughterbots as science fiction, but for Asaro, it’s the not-so-distant future.

More than 90 academics in artificial intelligence, ethics, and computer science released an open letter today that calls on Google to end its work on Project Maven and to support an international treaty prohibiting autonomous weapons systems.

Speaking to a crowd of captivated futurists at the Speakeasy in Downtown Austin at #SXSW, Peter Asaro warned of a future in which “autonomous weapon systems are delegated with the authority to initiate the use of lethal force” — in other words, a world where killer robots get to decide who lives and dies.

CIS Affiliate Scholars Peter Asaro, Ryan Calo and Woodrow Hartzog are listed as participants for We Robot 2014. Robotics is becoming a transformative technology. We Robot 2014 builds on existing scholarship exploring the role of robotics to examine how the increasing sophistication of robots and their widespread deployment everywhere from the home, to hospitals, to public spaces, and even to the battlefield disrupts existing legal regimes or requires rethinking of various policy issues. If you are on the front lines of robot theory, design, or development, we hope to see you.

The motion under debate will be: “Should there be an absolute ban on autonomous systems capable of using lethal force?” Two key speakers will argue for and against the motion, and respond to each other’s presentation. This will be followed by a discussion session with the audience, and a public vote.

Famed physicist Stephen Hawking warns that while success in creating artificial intelligence would be the biggest event in human history, it may also be our last. What can we do to prepare ourselves now before it's too late?