If robots are going to become a vital part of everyday life, it's important that they know the difference between right
and wrong when it comes to decision making. As IEEE Spectrum's Kristen Clark reports, that's why a team of researchers is
attempting to model moral reasoning in a robot. To pull it off, they'll need to answer some important
questions: How can we quantify the fuzzy, conflicting norms that guide human choices? How can we equip robots with the
communication skills to explain their choices in a way that we can understand? And would we even want robots to make the
same decisions we'd expect humans to make?

By quantifying the relationships among words, researchers hope to create a network of social norms that robots
can use as a moral roadmap.
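To make the idea of a quantified norm network a bit more concrete, here is a minimal sketch in Python. It is not the researchers' actual method; it simply assumes, for illustration, that word relationships are measured by how often words co-occur in short descriptions of everyday situations, and that those counts become edge weights in a network. The toy corpus and the co-occurrence weighting scheme are both assumptions invented for this example.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical mini-corpus of situation descriptions. In real research,
# the vocabulary and its relationships would come from large-scale
# human-subject data, not a hand-written toy list like this one.
corpus = [
    "help an injured stranger",
    "help a lost child",
    "ignore an injured stranger",
    "comfort a lost child",
]

# Count how often pairs of words appear in the same description.
# The counts act as crude edge weights in a network of associations.
edge_weights = defaultdict(int)
for sentence in corpus:
    words = sorted(set(sentence.split()))
    for a, b in combinations(words, 2):
        edge_weights[(a, b)] += 1

# Show the strongest associations -- a stand-in for the "moral roadmap"
# a robot might consult when weighing how to act in a given situation.
for (a, b), weight in sorted(edge_weights.items(), key=lambda kv: -kv[1])[:5]:
    print(f"{a} -- {b}: weight {weight}")
```

Even this toy version hints at the harder problems the article raises: the weights only capture association, not whether an action is right or wrong, and nothing in the network explains a choice in terms a person would accept.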