Morality for robots?

August 27, 2012

In new book, NIU Professor David Gunkel examines ethical questions raised by 21st century computers, robots and artificial intelligence

David Gunkel

On the topic of computers, artificial intelligence and robots, Northern Illinois University Professor David Gunkel says science fiction is fast becoming “science fact.”

Fictional depictions of artificial intelligence have run the gamut from the loyal Robot in “Lost in Space” to the killer computer HAL in “2001: A Space Odyssey” and the endearing C-3PO and R2-D2 of “Star Wars” fame.

While those robotic personifications are still the stuff of fiction, the issues they raised have never been more relevant than today, says Gunkel, an NIU Presidential Teaching Professor in the Department of Communication.

“A lot of the innovation in thinking about machines and their moral consideration has been done in science fiction, and this book calls upon fiction to show us how we’ve confronted the problem,” Gunkel says. “In fact, the first piece of writing to use the term ‘robot’ was a 1920s play called ‘R.U.R.,’ which included a meditation on our responsibilities to these machines.”

Ethics is typically understood as being concerned with questions of responsibility for and in the face of an “other,” presumably another person.

But Gunkel, who holds a Ph.D. in philosophy, notes that this cornerstone of modern ethical thought has been significantly challenged, most visibly by animal rights activists but also increasingly by those at the cutting edge of technology.

“If we admit the animal should have moral consideration, we need to think seriously about the machine,” Gunkel says. “It is really the next step in terms of looking at the non-human other.”

The NIU professor of communication technology points out that real decision-making machines are now ensconced in business, personal lives and even national defense. Machines are trading stocks, deciding whether you're credit-worthy and conducting clandestine drone missions overseas.

“Online interactions with machines provide an even more pervasive example,” Gunkel adds. “It’s getting more difficult to distinguish whether we’re talking to a human or to a machine. In fact, the majority of activity on the Internet is machine traffic—that is, machine to machine. Machines have taken over; it has happened.”

Some machines even have the ability to innovate or become smarter, raising questions over who is responsible for their actions. “It could be viewed as if the programmer who writes the original program is like a parent who no longer is responsible for the machine’s decisions and innovations,” Gunkel says.

Some governments are beginning to address the ethical dilemmas. South Korea, for instance, created a code of ethics to prevent human abuse of robots – and vice versa. Meanwhile, Japan’s Ministry of Economy, Trade and Industry is purportedly working on a code of behavior for robots, especially those employed in the elder-care industry.

He points to the case of South African sprinter and double amputee Oscar Pistorius, nicknamed "blade runner" because he runs on two carbon-fiber prosthetic legs. In 2008, Pistorius was barred from competing in the Beijing Olympics over concerns that the prostheses gave him an unfair advantage. That ruling was successfully challenged, and Pistorius competed in the 2012 London Games.

Similar concerns about the fairness of human augmentation can be seen in the recent crisis “concerning pharmacological prosthetics, or steroids, in professional baseball,” Gunkel says. “This is, I would argue, one version of the machine question.”

Oscar Pistorius

But Gunkel says he was inspired to write “The Machine Question” because engineers and scientists are increasingly bumping up against important ethical questions related to machines.

“Engineers are smart people but are not necessarily trained in ethics,” Gunkel says. “In a way, this book aims to connect the dots across the disciplinary divide, to get the scientists and engineers talking to the humanists, who bring 2,500 years of ethical thinking to bear on these problems posed by new technology.

“The real danger,” Gunkel adds, “is if we don’t have these conversations.”

In "The Machine Question," Gunkel frames a debate that has ramped up in academia in recent years, with conferences, symposia and workshops carrying provocative titles such as "AI, Ethics, and (Quasi) Human Rights."

“I wanted to follow all the threads, provide an overview and make sure we’re asking the right questions,” Gunkel says.

He concludes in his new book that the moral community indeed has been far too restrictive.

"Historically, we have excluded many entities from moral consideration, and these exclusions have had devastating effects for others," Gunkel says. "Just as the animal was successfully extended moral consideration in the second half of the 20th century, I conclude that we will, in the 21st century, need to consider doing something similar for the intelligent machines and robots that are increasingly part of our world."
