We study the problem of synthesizing a number of likely future frames from a single input image. In contrast to traditional methods, which have tackled this problem in a deterministic or non-parametric way, we propose to model future frames in a probabilistic manner. Our probabilistic model makes it possible for us to sample and synthesize many possible future frames from a single input image. To synthesize realistic movement of objects, we propose a novel network structure, namely a Cross Convolutional Network; this network encodes image and motion information as feature maps and convolutional kernels, respectively. In experiments, our model performs well on synthetic data, such as 2D shapes and animated game sprites, as well as on real-world video frames. Please refer to our website for more details: http://visualdynamics.csail.mit.edu/.
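The core idea of the cross convolutional layer can be sketched in a few lines: the image encoder produces per-channel feature maps, the motion pathway produces one small kernel per channel, and each channel is convolved with its own kernel. The function name, shapes, and toy data below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def cross_convolve(feature_maps, kernels):
    """Apply one predicted kernel per feature-map channel.

    feature_maps: (C, H, W) array, standing in for the image encoder output.
    kernels:      (C, k, k) array, standing in for the motion-derived kernels.
    Returns a (C, H, W) array: each channel convolved with its own kernel
    (zero padding, 'same' output size) -- the cross-convolution idea.
    """
    C, H, W = feature_maps.shape
    k = kernels.shape[-1]
    pad = k // 2
    padded = np.pad(feature_maps, ((0, 0), (pad, pad), (pad, pad)))
    out = np.zeros_like(feature_maps)
    for c in range(C):
        for i in range(H):
            for j in range(W):
                out[c, i, j] = np.sum(padded[c, i:i + k, j:j + k] * kernels[c])
    return out

# Toy example: 2 feature-map channels, one 3x3 kernel per channel.
rng = np.random.default_rng(0)
feats = rng.standard_normal((2, 8, 8))
kerns = rng.standard_normal((2, 3, 3))
print(cross_convolve(feats, kerns).shape)  # (2, 8, 8)
```

Because different sampled motions yield different kernels, the same image features can be "moved" in different ways, which is what lets the model synthesize many possible futures from one input.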

Bio:

Tianfan Xue is currently a fifth-year Ph.D. student at MIT CSAIL, working with William T. Freeman. Before that, he received his B.E. degree from Tsinghua University and his M.Phil. degree from The Chinese University of Hong Kong. His research interests include computer vision, image processing, and machine learning. Specifically, he is interested in motion estimation and in image and video processing based on motion information.