Title: Video-to-Video Synthesis

Abstract: We study the problem of video-to-video synthesis, whose goal is to learn a
mapping function from an input source video (e.g., a sequence of semantic
segmentation masks) to an output photorealistic video that precisely depicts
the content of the source video. While its image counterpart, the
image-to-image synthesis problem, is a popular topic, the video-to-video
synthesis problem is less explored in the literature. Without understanding
temporal dynamics, directly applying existing image synthesis approaches to an
input video often results in temporally incoherent videos of low visual
quality. In this paper, we propose a novel video-to-video synthesis approach
under the generative adversarial learning framework. Through carefully designed
generator and discriminator architectures, coupled with a spatio-temporal
adversarial objective, we achieve high-resolution, photorealistic, temporally
coherent video results on a diverse set of input formats including segmentation
masks, sketches, and poses. Experiments on multiple benchmarks show the
advantage of our method compared to strong baselines. In particular, our model
is capable of synthesizing 2K resolution videos of street scenes up to 30
seconds long, which significantly advances the state of the art in video
synthesis. Finally, we apply our approach to future video prediction,
outperforming several state-of-the-art competing systems.
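
The "spatio-temporal adversarial objective" mentioned above can be made concrete: alongside the usual spatial discriminator that scores single frames, a second discriminator scores short clips, so that flicker and incoherent motion are also penalized, not just per-frame realism. Below is a minimal PyTorch sketch of the discriminator side of such an objective; it is an illustration, not the authors' implementation. The LSGAN-style least-squares loss, the channel-stacking clip discriminator, and the names FrameDiscriminator, VideoDiscriminator, and spatio_temporal_d_loss are all assumptions made here for the example.

    import torch
    import torch.nn as nn

    class FrameDiscriminator(nn.Module):
        """Spatial discriminator: scores individual frames (hypothetical)."""
        def __init__(self, in_ch=3):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.Conv2d(128, 1, 4, stride=1, padding=1),  # patch-level scores
            )

        def forward(self, frames):            # frames: (B, C, H, W)
            return self.net(frames)

    class VideoDiscriminator(nn.Module):
        """Temporal discriminator: scores a short clip with its frames stacked
        on the channel axis (an assumption; 3D convolutions would also work)."""
        def __init__(self, in_ch=3, n_frames=3):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(in_ch * n_frames, 64, 4, stride=2, padding=1),
                nn.LeakyReLU(0.2),
                nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.Conv2d(128, 1, 4, stride=1, padding=1),
            )

        def forward(self, clip):              # clip: (B, T, C, H, W)
            b, t, c, h, w = clip.shape
            return self.net(clip.reshape(b, t * c, h, w))

    def spatio_temporal_d_loss(d_img, d_vid, real_clip, fake_clip):
        """Discriminator side of a combined spatio-temporal objective (LSGAN-style)."""
        mse = nn.MSELoss()
        b, t, c, h, w = real_clip.shape
        # Spatial term: every frame is judged independently for per-frame realism.
        pr = d_img(real_clip.reshape(b * t, c, h, w))
        pf = d_img(fake_clip.reshape(b * t, c, h, w))
        loss_img = mse(pr, torch.ones_like(pr)) + mse(pf, torch.zeros_like(pf))
        # Temporal term: the clip is judged as a whole, penalizing flicker and
        # incoherent motion that per-frame scores cannot see.
        qr = d_vid(real_clip)
        qf = d_vid(fake_clip)
        loss_vid = mse(qr, torch.ones_like(qr)) + mse(qf, torch.zeros_like(qf))
        return loss_img + loss_vid

    # Toy usage: (batch=2, frames=3, channels=3, 64x64) clips.
    d_img, d_vid = FrameDiscriminator(), VideoDiscriminator(n_frames=3)
    real = torch.randn(2, 3, 3, 64, 64)
    fake = torch.randn(2, 3, 3, 64, 64)   # would come from the generator
    print(spatio_temporal_d_loss(d_img, d_vid, real, fake))

The generator would be trained against the same two discriminators with the real/fake target labels flipped; the paper's full method additionally makes use of optical flow (e.g., for warping previous frames), which this sketch leaves out.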

Comments: In NeurIPS, 2018. Code, models, and more results are available at this https URL