Thank God someone is now teaching robots to disobey human orders

When you think about teaching robots to say "no" to human commands, your immediate reaction might be, "that seems like a truly horrible idea." It is, after all, the bread and butter of science fiction nightmares; the first step to robots taking over the world.

Consider this: the first robots to do evil deeds will definitely be acting on human orders. In fact, depending on your definition of a "robot" and of "evil," they already have. And the threat of a human-directed robot destroying the world is arguably greater than that of a rogue robot doing so.

That's where Briggs and Scheutz come in. They want to teach robots when to say "absolutely not" to humans.

To do so, the pair have created a set of questions their robots need to answer before they will accept a command from a human:

Knowledge: Do I know how to do X?

Capacity: Am I physically able to do X now? Am I normally physically able to do X?

Goal priority and timing: Am I able to do X right now?

Social role and obligation: Am I obligated based on my social role to do X?

Normative permissibility: Does it violate any normative principle to do X?

These questions work as a simplified version of the calculations humans make every day, except they hew more closely to logic than our thought processes do. There's no "do I just not feel like getting out of bed right now?" question.
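To see how such a checklist might function in practice, here's a minimal sketch of running a command through the five questions in order, refusing at the first failure. This is not Briggs and Scheutz's actual system; every check name and predicate below is a hypothetical stand-in for illustration.

```python
# Hypothetical sketch: gate a command behind the five questions above.
# Each check is a (name, predicate) pair; the first failing check
# produces the robot's reason for refusing.

def accept_command(action, checks):
    """Return (True, None) if every check passes, else (False, reason)."""
    for name, passes in checks:
        if not passes(action):
            return False, f"Refusing '{action}': failed {name} check"
    return True, None

# Toy predicates for an imaginary robot that can walk but shouldn't
# walk off a table. All of these are invented for the example.
checks = [
    ("knowledge",     lambda a: a in {"walk forward", "sit down",
                                      "walk off the table"}),
    ("capacity",      lambda a: a != "fly"),
    ("goal priority", lambda a: True),   # nothing more urgent right now
    ("social role",   lambda a: True),   # the commander is authorized
    ("normative",     lambda a: a != "walk off the table"),
]

print(accept_command("walk forward", checks))
print(accept_command("walk off the table", checks))
```

Here "walk forward" passes all five checks, while "walk off the table" clears the first four but fails the normative check, so the robot refuses and can report why.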