I am a research scientist at Google Brain working at the intersection of machine learning and computer security. My most recent line of work studies properties of neural networks from an adversarial perspective. I received my Ph.D. from UC Berkeley in 2018, and my B.A. in computer science and mathematics (also from UC Berkeley) in 2013.

Generally, I am interested in attacking machine learning systems; most of my work develops attacks that demonstrate security and privacy risks of these systems.
I have received four best paper awards (including at ICML and IEEE S&P), and my work has been featured in the New York Times, the BBC, Science Magazine, Wired, and Popular Science.

At ICML this year, I presented a paper I wrote with Anish Athalye and my
advisor David Wagner:
Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples.
In this paper, we demonstrate that most of the ICLR '18 adversarial example
defenses were ineffective at defending against attack; rather than
eliminating adversarial examples, they merely broke existing attack
algorithms. We introduce stronger attacks that work in the presence of
what we call “obfuscated gradients”.
Because we won a best paper award, we were able to give two talks; the talk
linked here is the plenary talk, in which I argue that the evaluation
methodology widely used in the community today is insufficient, and can be improved.
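To give a concrete flavor of these stronger attacks, below is a minimal sketch of BPDA (Backward Pass Differentiable Approximation), one of the techniques from the paper, written in PyTorch. The quantization "defense" and the classifier here are illustrative placeholders I made up for this sketch, not any specific published defense.

    import torch

    class BPDAQuantize(torch.autograd.Function):
        # Non-differentiable input quantization: a stand-in for a defense
        # that shatters gradients. On the backward pass we pretend the
        # operation is the identity, so gradients flow through unchanged.
        @staticmethod
        def forward(ctx, x):
            return torch.round(x * 255.0) / 255.0

        @staticmethod
        def backward(ctx, grad_output):
            return grad_output  # identity approximation of the quantizer

    def bpda_attack(model, x, y, eps=8/255, steps=40, step_size=1/255):
        # Standard iterative gradient attack (PGD) on the defended model,
        # with BPDA standing in for the non-differentiable preprocessing.
        x_adv = x.clone().detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = torch.nn.functional.cross_entropy(
                model(BPDAQuantize.apply(x_adv)), y)
            grad, = torch.autograd.grad(loss, x_adv)
            with torch.no_grad():
                x_adv = x_adv + step_size * grad.sign()   # ascend the loss
                x_adv = x + (x_adv - x).clamp(-eps, eps)  # project to eps-ball
                x_adv = x_adv.clamp(0.0, 1.0)             # keep a valid image
        return x_adv.detach()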

Earlier this year, at the IEEE Deep Learning and Security Workshop,
I received the best paper award for a paper
with my advisor David Wagner:
Audio Adversarial Examples: Targeted Attacks on Speech-to-Text.
In this paper, we demonstrate that it is possible to construct two audio
samples that sound nearly indistinguishable to a human, but that a machine
learning algorithm recognizes completely differently.
This paper builds in part on our
prior work,
where we constructed audio that
sounds like noise to humans but like speech to machine learning algorithms.
This demonstration picked up a few rounds of press and was covered by
the New York Times, TechCrunch, and CNET (among others).
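For the curious, the core of the audio attack is a straightforward optimization loop. Here is a minimal sketch in PyTorch, assuming a differentiable speech-to-text model `model` that maps a waveform to per-frame character logits of shape (time, 1, vocab); that model, and the fixed amplitude bound `epsilon`, are placeholder assumptions (the paper attacks Mozilla's DeepSpeech and bounds the perturbation in decibels).

    import torch

    def audio_attack(model, audio, target, epsilon=0.05, steps=1000, lr=1e-3):
        # Find a quiet additive perturbation `delta` so that
        # model(audio + delta) transcribes as `target` (a 1-D long
        # tensor of character indices for the target phrase).
        delta = torch.zeros_like(audio, requires_grad=True)
        opt = torch.optim.Adam([delta], lr=lr)
        for _ in range(steps):
            log_probs = model(audio + delta).log_softmax(-1)
            loss = torch.nn.functional.ctc_loss(
                log_probs, target.unsqueeze(0),
                input_lengths=torch.tensor([log_probs.size(0)]),
                target_lengths=torch.tensor([target.numel()]))
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():
                delta.clamp_(-epsilon, epsilon)  # keep the perturbation quiet
        return (audio + delta).detach()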

In 2017, at IEEE S&P, I received the best student paper award for a paper
with my advisor David Wagner:
Towards Evaluating the Robustness of Neural Networks.
In this paper, we introduce a class of attacks that generate adversarial
examples by solving an optimization problem with gradient descent. We argue that
iterative optimization-based attacks are significantly more effective than
prior attacks, and demonstrate this on multiple datasets.
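At its core, the L2 version of the attack minimizes ||delta||^2 + c * f(x + delta) by gradient descent, where f is a margin loss over the logits, after a change of variables that keeps pixels in range. Here is a minimal sketch in PyTorch; the classifier `model` (returning logits for images in [0, 1]) is a placeholder, and in the paper the constant `c` is found by binary search rather than fixed.

    import torch

    def cw_l2_attack(model, x, target, c=1.0, steps=1000, lr=0.01, kappa=0.0):
        # Targeted L2 attack. Optimizing in tanh-space means the
        # adversarial example always stays inside the valid range [0, 1].
        w = torch.atanh((x * 2 - 1).clamp(-0.999999, 0.999999)).detach()
        w.requires_grad_(True)
        opt = torch.optim.Adam([w], lr=lr)
        for _ in range(steps):
            x_adv = 0.5 * (torch.tanh(w) + 1)
            logits = model(x_adv)
            target_logit = logits.gather(1, target.view(-1, 1)).squeeze(1)
            other_logit = logits.scatter(
                1, target.view(-1, 1), float('-inf')).max(1).values
            # margin loss: push the target logit above every other logit
            f = torch.clamp(other_logit - target_logit, min=-kappa)
            loss = ((x_adv - x) ** 2).flatten(1).sum(1) + c * f
            opt.zero_grad()
            loss.sum().backward()
            opt.step()
        return (0.5 * (torch.tanh(w) + 1)).detach()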