Title: Deep Tracking on the Move: Learning to Track the World from a Moving Vehicle using Recurrent Neural Networks

Abstract: This paper presents an end-to-end approach for tracking static and dynamic
objects for an autonomous vehicle driving through crowded urban environments.
Unlike traditional approaches to tracking, this method is learned end-to-end
and directly predicts a full, unoccluded occupancy grid map from raw
laser input data. Inspired by the recently presented DeepTracking approach
[Ondruska, 2016], we employ a recurrent neural network (RNN) to capture the
temporal evolution of the state of the environment, and propose to use Spatial
Transformer modules to exploit estimates of the egomotion of the vehicle. Our
results demonstrate the ability to track a range of objects, including cars,
buses, pedestrians, and cyclists through occlusion, from both moving and
stationary platforms, using a single learned model. We further show that the
model can predict the future states of objects from current inputs, with
greater accuracy than previous work.
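The abstract's key architectural idea is that a Spatial Transformer module resamples the RNN's hidden-state grid to compensate for the vehicle's egomotion, so that the memory stays registered to the current vehicle frame. The sketch below illustrates this resampling step on a 2D occupancy grid using bilinear interpolation; it is a minimal illustration of the idea, not the paper's implementation, and the function name and the cell-based parameterisation of egomotion (`dx`, `dy` in cells, `theta` in radians) are assumptions made for this example.

```python
import numpy as np

def egomotion_transform(grid, dx, dy, theta):
    """Resample a 2D grid into the vehicle frame after the vehicle has
    translated by (dx, dy) cells and rotated by theta radians.

    This plays the role a Spatial Transformer module plays for the RNN
    hidden state: an inverse warp followed by bilinear sampling, with
    zero padding for cells that fall outside the old grid. (Illustrative
    sketch only; not the paper's implementation.)
    """
    h, w = grid.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    x, y = xs - cx, ys - cy                      # grid-centred coordinates
    cos_t, sin_t = np.cos(theta), np.sin(theta)
    # Inverse transform: for each target cell, find its source location.
    src_x = cos_t * x - sin_t * y + dx + cx
    src_y = sin_t * x + cos_t * y + dy + cy
    x0 = np.floor(src_x).astype(int); x1 = x0 + 1
    y0 = np.floor(src_y).astype(int); y1 = y0 + 1
    wx, wy = src_x - x0, src_y - y0              # bilinear weights

    def sample(yi, xi):
        # Gather values, treating everything outside the grid as empty.
        valid = (yi >= 0) & (yi < h) & (xi >= 0) & (xi < w)
        out = np.zeros_like(grid, dtype=float)
        out[valid] = grid[yi[valid], xi[valid]]
        return out

    return ((1 - wy) * (1 - wx) * sample(y0, x0)
            + (1 - wy) * wx * sample(y0, x1)
            + wy * (1 - wx) * sample(y1, x0)
            + wy * wx * sample(y1, x1))
```

For example, after the vehicle advances one cell along the grid's x-axis, a static obstacle one cell ahead appears one cell closer in the resampled memory, while the occupancy pattern itself is unchanged. In the full model this warp is applied to the hidden state at every time step before the new laser input is integrated, and the same mechanism is differentiable, so it can sit inside the end-to-end training loop.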