Title: Divide-and-Conquer Reinforcement Learning

Abstract: Standard model-free deep reinforcement learning (RL) algorithms sample a new
initial state for each trial, allowing them to optimize policies that can
perform well even in highly stochastic environments. However, problems that
exhibit considerable initial state variation typically produce high-variance
gradient estimates for model-free RL, making direct policy or value function
optimization challenging. In this paper, we develop a novel algorithm that
instead optimizes an ensemble of policies, each on a different "slice" of the
initial state space, and gradually unifies them into a single policy that can
succeed on the whole state space. This approach, which we term
divide-and-conquer RL, is able to solve complex tasks where conventional deep
RL methods are ineffective. Our results show that divide-and-conquer RL greatly
outperforms conventional policy gradient methods on challenging grasping,
manipulation, and locomotion tasks, and exceeds the performance of a variety of
prior methods.
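
For intuition, the following is a minimal sketch of the divide-and-conquer recipe in Python. It is not the paper's algorithm: it assumes a toy chain MDP, trains a separate tabular REINFORCE policy on each "slice" of the initial state space, and then unifies the local policies by averaging their action distributions, a crude stand-in for the distillation step a full method would use. All names here (rollout, reinforce, the chain environment itself) are illustrative.

import numpy as np

# Toy chain MDP: states 0..N-1, goal at the center, actions {0: left, 1: right}.
# Wide variation in the initial state is exactly the setting DnC targets.
N, GOAL, ACTIONS, HORIZON = 11, 5, 2, 20
rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def rollout(logits, start):
    # One episode under a tabular softmax policy; returns (state, action, reward) triples.
    s, traj = start, []
    for _ in range(HORIZON):
        a = rng.choice(ACTIONS, p=softmax(logits[s]))
        s2 = int(np.clip(s + (1 if a == 1 else -1), 0, N - 1))
        r = 1.0 if s2 == GOAL else -0.01
        traj.append((s, a, r))
        s = s2
        if s == GOAL:
            break
    return traj

def reinforce(logits, starts, iters=400, lr=0.5):
    # Plain REINFORCE restricted to one slice of the initial state space.
    for _ in range(iters):
        traj = rollout(logits, rng.choice(starts))
        ret = sum(r for _, _, r in traj)
        for s, a, _ in traj:
            grad = -softmax(logits[s])
            grad[a] += 1.0                  # d log pi(a|s) / d logits[s]
            logits[s] += lr * ret * grad
    return logits

# Divide: partition the initial states into two slices, one local policy each.
slices = [np.arange(0, GOAL), np.arange(GOAL + 1, N)]
local_policies = [reinforce(np.zeros((N, ACTIONS)), sl) for sl in slices]

# Conquer: unify the local policies into a single central policy by averaging
# their per-state action distributions (standing in for KL-based distillation).
avg = np.mean([np.apply_along_axis(softmax, 1, l) for l in local_policies], axis=0)
central = np.log(avg + 1e-8)

# The unified policy should reach the goal from either end of the chain.
for start in (0, N - 1):
    print(start, sum(r for _, _, r in rollout(central, start)))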

Comments: Videos of policies learned by our algorithm can be viewed at this https URL