Anca Dragan has a cool name, an impressive CV and an important job. While many roboticists focus on making AI better, faster and smarter, Dragan is also concerned with robot quality control. In anticipation of robots moving into every area of our lives, she wants to ensure our interactions with them are positive ones. The computer scientist and robotics engineer is a principal investigator with UC Berkeley's Center for Human-Compatible AI. "One particular area of interest is the problem of value alignment," says Dragan. "How do you ensure that an artificially intelligent agent, be it a robot a few years from now or a much more capable agent in the future, optimizes the right objectives? How do we teach these agents to optimize what we actually want optimized?"

Preventing undesirable robot behavior is becoming a priority as robots grow more intelligent, more nimble and increasingly autonomous. It's something a lot of people feel uneasy about, even if most of us don't know enough about AI to put our concerns into quite the right words.

Paul Gordon is the publisher and editor of iState.TV. He has published and edited newspapers, poetry magazines and online weekly magazines. He is the director of Social Cognito, an SEO/web marketing company. You can reach Paul at pg@istate.tv.