Title: Robust Zero-Sum Deep Reinforcement Learning

Abstract: We present a method for evaluating the sensitivity of deep reinforcement
learning (RL) policies. We also formulate a zero-sum dynamic game for designing
robust deep RL policies. Our approach mitigates the brittleness that arises when
agents are trained in a simulated environment and later deployed in the real
world, where employing a brittle RL policy is hazardous. The first problem we
address is demonstrating that deep RL policies are sensitive to disturbances,
unmodeled dynamics, and noise. In the second phase, we train two agents
simultaneously in a zero-sum dynamic game; the goal is to drive the system
dynamics to a saddle region. Using a variant of the guided policy search (GPS)
algorithm, we verify these assumptions empirically. Our agent learns robust
policies that require fewer samples for learning the dynamics.
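
As a rough illustration of the zero-sum formulation described above (the notation, discount factor, and reward convention here are assumptions for this sketch, not definitions from the paper), the protagonist policy u and the adversarial policy v can be viewed as solving a minimax problem whose solution is a saddle point of the expected return:

  % Illustrative saddle-point objective (assumed notation):
  % u = protagonist policy, v = adversarial policy, r = stage reward,
  % \gamma = discount factor, x_t = system state.
  \min_{v} \max_{u} \; J(u, v)
    = \mathbb{E}\Big[\sum_{t=0}^{\infty} \gamma^{t}\, r(x_t, u_t, v_t)\Big],
  \qquad
  J(u, v^*) \;\le\; J(u^*, v^*) \;\le\; J(u^*, v) \quad \text{for all } u, v.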