Security for Artificial Intelligence

As intelligent systems become pervasive, safeguarding their security and privacy is critical. For example, adversarially manipulating the perceptual system of an autonomous vehicle may cause it to misread road signs, with potentially catastrophic consequences.
The goal of this project is to design efficient learning systems that remain resilient to sophisticated adversarial manipulation in real-world applications.

Towards this goal, we focus on adversarial learning, an interdisciplinary
field at the intersection of machine learning, security, privacy, and game theory. Special emphasis is placed on understanding the weaknesses of learning systems, both theoretically and empirically: exploring novel attack strategies, modeling the dynamics between intelligent adversaries and learning systems game-theoretically, and applying security expertise to
strengthen learning systems.
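
To make the notion of an attack strategy concrete, the sketch below illustrates one classic technique, the Fast Gradient Sign Method (FGSM), which perturbs an input in the direction that most increases the model's loss. The model here is a toy logistic-regression classifier chosen purely for illustration; the weights, inputs, and epsilon are hypothetical, not taken from any system studied in this project.

```python
import math


def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))


def fgsm(x, y, w, eps):
    """FGSM perturbation for logistic regression (illustrative sketch).

    Model: p(correct) = sigmoid(y * w.x), label y in {-1, +1}.
    Loss:  L = -log(sigmoid(y * w.x)).
    Since dL/dx_i = -y * sigmoid(-y * w.x) * w_i, the gradient's sign
    per coordinate is -y * sign(w_i); FGSM steps eps in that direction.
    """
    return [
        xi + eps * (-y) * (1 if wi > 0 else -1 if wi < 0 else 0)
        for xi, wi in zip(x, w)
    ]


# Hypothetical example: a small perturbation flips the prediction.
w = [2.0, -1.0]          # illustrative model weights
x = [1.0, 1.0]           # clean input, true label y = +1
x_adv = fgsm(x, 1, w, eps=0.5)

clean_conf = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))      # ~0.73
adv_conf = sigmoid(sum(wi * xi for wi, xi in zip(w, x_adv)))    # ~0.38
```

Even this toy case shows the core vulnerability: a bounded, targeted perturbation moves the input across the decision boundary, so the classifier's confidence in the correct label drops below 0.5.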