Deep learning has attained high accuracy on numerous benchmarks, but we show that such methods can perform surprisingly poorly on seemingly innocuous perturbations of the input, in both natural language understanding and computer vision. We develop general diagnostic tools based on influence functions to probe, understand, and identify weaknesses in existing methods. Next, we show how an agent can learn by interacting directly with humans. We demonstrate one system that learns a natural language interface to a simple programming language through user interaction, and another that collaborates with humans via dialogue on a simple item-matching task.

Bio:

Percy Liang is an Assistant Professor of Computer Science at Stanford University (B.S. from MIT, 2004; Ph.D. from UC Berkeley, 2011). His research interests include modeling natural language semantics and developing machine learning methods that infer rich latent structures from limited supervision. His awards include the IJCAI Computers and Thought Award (2016), an NSF CAREER Award (2016), a Sloan Research Fellowship (2015), a Microsoft Research Faculty Fellowship (2014), and a best student paper award at the International Conference on Machine Learning (2008).