Advances in machine learning have led to rapid and widespread
deployment of software-based inference and decision-making agents, as well as
various applications such as data analytics, autonomous systems, and security
diagnostics. Current machine learning systems, however, assume that training
and testing data are drawn from the same, or similar, distributions, and do not
consider active adversaries manipulating either distribution. In this talk, I
will demonstrate that motivated adversaries can circumvent anomaly detection or
classification models at test time through evasion attacks, or inject
well-crafted malicious instances into the training data to induce
classification errors through poisoning attacks. I will discuss my recent
research on evasion attacks, poisoning attacks, and physical attacks against
machine learning systems in adversarial environments.
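To make the evasion-attack idea concrete, here is a minimal sketch against a toy linear classifier. Everything in it (the weights, the input, the epsilon budget, and the function names) is an illustrative assumption, not material from the talk; it applies an FGSM-style signed perturbation, shifting the input against the sign of each weight until the predicted label flips.

```python
# Hypothetical toy example: evading a linear classifier at test time.
# All weights and inputs below are made up for illustration.

def predict(w, b, x):
    """Linear classifier: sign of w.x + b, returned as +1 or -1."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score >= 0 else -1

def evade(w, b, x, eps):
    """FGSM-style evasion: move each feature eps against the sign of
    its weight, pushing the score toward the opposite class."""
    label = predict(w, b, x)
    sgn = lambda v: (v > 0) - (v < 0)
    return [xi - label * eps * sgn(wi) for xi, wi in zip(x, w)]

w, b = [0.5, -0.3, 0.8], -0.2
x = [1.0, 0.5, 0.6]               # originally classified as +1
x_adv = evade(w, b, x, eps=0.6)   # small, bounded perturbation
print(predict(w, b, x), predict(w, b, x_adv))  # → 1 -1
```

A poisoning attack is the training-time analogue: instead of perturbing a test input, the adversary inserts crafted points into the training set so that the learned weights `w, b` themselves are wrong.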

Bio:

Dr. Bo Li is an assistant professor
in the Department of Electrical Engineering and Computer Science at the
University of Illinois at Urbana–Champaign, and a recipient of the 2015
Symantec Research Labs Graduate Fellowship. She was a postdoctoral researcher
at UC Berkeley, working with Professor Dawn Song. Her research focuses on both
theoretical and practical aspects of machine learning, security, privacy, game
theory, social networks, and adversarial deep learning. She has designed
several robust learning algorithms, a scalable framework for achieving
robustness for a range of learning methods, and a privacy-preserving data
publishing system. She is also active in adversarial deep learning research on
training generative adversarial networks (GANs) and designing robust deep neural
networks against adversarial examples. Her website is
http://www.crystal-boli.com/home.html