Generating Videos with Dynamics-aware
Implicit Generative Adversarial Networks


ICLR 2022

Sihyun Yu*,1
Jihoon Tack*,1
Sangwoo Mo*,1
Hyunsu Kim2
Junho Kim2
Jung-Woo Ha2
Jinwoo Shin1

1 KAIST     
2 NAVER AI Lab

*Denotes equal contribution

[Paper]
[Code]
[Slides]

Note: All videos are presented at 8 frames per second (FPS), unless otherwise specified.

Abstract

In the deep learning era, generating long videos of high quality remains challenging due to the spatio-temporal complexity and continuity of videos. Prior works have attempted to model the video distribution by representing videos as 3D grids of RGB values, which impedes the scale of generated videos and neglects continuous dynamics. In this paper, we find that the recently emerging paradigm of implicit neural representations (INRs), which encode a continuous signal into a parameterized neural network, effectively mitigates this issue. Utilizing video INRs, we propose the dynamics-aware implicit generative adversarial network (DIGAN), a novel generative adversarial network for video generation. Specifically, we introduce (a) an INR-based video generator that improves motion dynamics by manipulating the space and time coordinates differently and (b) a motion discriminator that efficiently identifies unnatural motions without observing the entire long frame sequence. We demonstrate the superiority of DIGAN on various datasets, along with multiple intriguing properties, e.g., long video synthesis, video extrapolation, and non-autoregressive video generation. For example, DIGAN improves the previous state-of-the-art FVD score on UCF-101 by 30.7% and can be trained on 128-frame videos of 128×128 resolution, 80 frames longer than the 48 frames of the previous state-of-the-art method.
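
To make the two components concrete, here is a minimal PyTorch sketch. All names, latent sizes, and the sinusoidal coordinate encoding are illustrative assumptions rather than the actual implementation (which builds on INR-GAN); the essential ideas are that the generator is an INR mapping a space-time coordinate (x, y, t) plus content/motion latent vectors to an RGB value, treating the time axis with a smaller frequency than space, and that the motion discriminator judges a pair of frames together with their time gap instead of a full clip.

import torch
import torch.nn as nn

# Hypothetical sketch of the two DIGAN components; the official model is
# built on INR-GAN and is considerably more elaborate than this.
class INRVideoGenerator(nn.Module):
    """INR generator: a space-time coordinate (x, y, t) plus latents -> RGB."""
    def __init__(self, z_dim=128, hidden=256, space_freq=10.0, time_freq=1.0):
        super().__init__()
        # Time is encoded with a smaller frequency than space, so the output
        # varies more smoothly along t than along (x, y).
        self.space_freq, self.time_freq = space_freq, time_freq
        self.net = nn.Sequential(
            nn.Linear(3 + 2 * z_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Tanh(),
        )

    def forward(self, coords, z_content, z_motion):
        # coords: (N, 3) rows of (x, y, t); z_content, z_motion: (z_dim,).
        xy = torch.sin(self.space_freq * coords[:, :2])
        t = torch.sin(self.time_freq * coords[:, 2:])
        z = torch.cat([z_content, z_motion]).expand(coords.shape[0], -1)
        return self.net(torch.cat([xy, t, z], dim=1))  # (N, 3) RGB in [-1, 1]

class MotionDiscriminator(nn.Module):
    """Scores the realism of motion from a pair of frames and their time gap,
    so it never needs to observe an entire long clip."""
    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * 3 + 1, ch, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(ch, 2 * ch, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(2 * ch, 1),
        )

    def forward(self, frame_a, frame_b, dt):
        # frame_a, frame_b: (B, 3, H, W); dt: (B,) time gaps, tiled spatially.
        dt_map = dt.view(-1, 1, 1, 1).expand(-1, 1, *frame_a.shape[2:])
        return self.net(torch.cat([frame_a, frame_b, dt_map], dim=1))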


Main results

Randomly selected examples of generated videos (unconditional) on diverse datasets.

UCF-101 Sky Time-lapse

Tai-Chi-HD Kinetics-food

Long video generation

128-frame videos of 128×128 resolution on Tai-Chi-HD, shown at 30 FPS, identical to the real videos.

Example 1 Example 2

256-frame videos of 128×128 resolution on Tai-Chi-HD, shown at 30 FPS, identical to the real videos.

Example 1 Example 2

Randomly selected examples of 256-frame videos of 128×128 resolution on Tai-Chi-HD.


Randomly selected examples of 64-frame videos of 128×128 resolution on UCF-101.


Time interpolation

Interpolated videos with 8× higher FPS.

Sky Tai-Chi-HD

8 FPS 64 FPS 8 FPS 64 FPS
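
Time interpolation comes for free with an INR generator: a video is a continuous function of t, so a higher FPS simply means evaluating the same network on a denser time grid. A minimal sketch, reusing the hypothetical (coords, z_content, z_motion) interface from the generator sketch above:

import torch

def render_clip(gen, z_content, z_motion, fps, seconds=2.0,
                t_range=(0.0, 1.0), h=128, w=128):
    """Render a clip by querying the INR on an (x, y, t) grid; raising fps
    (e.g., 8 -> 64) just samples t more densely over the same interval."""
    ts = torch.linspace(t_range[0], t_range[1], int(fps * seconds))
    ys, xs = torch.meshgrid(torch.linspace(0, 1, h),
                            torch.linspace(0, 1, w), indexing="ij")
    frames = []
    for t in ts:
        coords = torch.stack([xs.flatten(), ys.flatten(),
                              torch.full((h * w,), float(t))], dim=1)
        frames.append(gen(coords, z_content, z_motion).view(h, w, 3))
    return torch.stack(frames)  # (T, H, W, 3)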

Time extrapolation

Extrapolated videos (4× longer): the red box denotes extrapolated frames (Figure 4, Table 2).

MoCoGAN-HD DIGAN (ours) MoCoGAN-HD DIGAN (ours)
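
Time extrapolation follows from the same property: evaluate the INR at time coordinates beyond the range seen during training. With the hypothetical render_clip sketch above:

# 4x longer clip (illustrative): query t in [0, 4] instead of the training
# range [0, 1]; the red-boxed frames correspond to t > 1.
long_clip = render_clip(gen, z_content, z_motion, fps=8,
                        seconds=8.0, t_range=(0.0, 4.0))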

Diverse motion sampling

Videos sampled from two different random motion vectors.

Sky Kinetics UCF-101

Vid. 1 Vid. 2 Diff. Vid. 1 Vid. 2 Diff. Vid. 1 Vid. 2 Diff.
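
Concretely, motion diversity comes from resampling only the motion latent while the content latent stays fixed, as the sketches above assume. An illustrative usage (latent sizes hypothetical):

import torch

# Same content latent, two different motion latents; the "Diff." column
# visualizes where the two clips disagree.
z_content = torch.randn(128)
clip_1 = render_clip(gen, z_content, torch.randn(128), fps=8)
clip_2 = render_clip(gen, z_content, torch.randn(128), fps=8)
diff = (clip_1 - clip_2).abs()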

Space interpolation

Upsampled videos from 128×128 to 512×512 resolution (4× larger) by various methods.

Original (128×128) Nearest Bilinear Bicubic DIGAN (ours)

Upsampled videos from 128×128 to 1024×1024 resolution (8× larger) by our method.

Sky

Tai-Chi-HD
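
Spatial upsampling works the same way along (x, y): since the INR is continuous in space, a 512×512 or 1024×1024 frame is produced by querying a denser coordinate grid, rather than by resampling pixels as nearest, bilinear, or bicubic interpolation do. A minimal sketch with the same assumed generator interface:

import torch

def render_frame(gen, z_content, z_motion, t=0.0,
                 h=512, w=512, span=(0.0, 1.0)):
    """Render one frame at an arbitrary resolution by sampling a denser
    (x, y) grid over `span`; no pixel-space interpolation is involved."""
    ys, xs = torch.meshgrid(torch.linspace(span[0], span[1], h),
                            torch.linspace(span[0], span[1], w), indexing="ij")
    coords = torch.stack([xs.flatten(), ys.flatten(),
                          torch.full((h * w,), float(t))], dim=1)
    return gen(coords, z_content, z_motion).view(h, w, 3)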

Space extrapolation

Zoomed-out videos (1.5×): the red box denotes extrapolated pixels (Figure 8).

Sky Time-lapse UCF-101 Kinetics-food
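
Zooming out is the spatial counterpart of time extrapolation: query (x, y) outside the unit square the model was trained on. With the hypothetical render_frame above:

# 1.5x zoom-out (illustrative): sample (x, y) over [-0.25, 1.25] rather than
# the training range [0, 1]; pixels outside the unit square (beyond the red
# boxes above) come from spatial extrapolation.
zoomed = render_frame(gen, z_content, z_motion, t=0.5,
                      h=192, w=192, span=(-0.25, 1.25))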

Class-conditional generation

Randomly selected examples of generated videos (class-conditional) on UCF-101.


Acknowledgements

SY thanks Jaeho Lee, Younggyo Seo, Minkyu Kim, Soojung Yang, Seokhyun Moon, and Jin-Hwa Kim for their helpful feedback on an early version of the manuscript. SY also acknowledges Ivan Skorokhodov for providing the implementation of INR-GAN. Experiments were partly run on the NAVER Smart Machine Learning (NSML) platform. This template was originally made by Phillip Isola and Richard Zhang for a colorful ECCV project; the code can be found here.

Bibtex

@inproceedings{yu2022digan,
     title={Generating Videos with Dynamics-aware Implicit Generative Adversarial Networks},
     author={Yu, Sihyun and Tack, Jihoon and Mo, Sangwoo and Kim, Hyunsu and Kim, Junho and Ha, Jung-Woo and Shin, Jinwoo},
     booktitle={The Tenth International Conference on Learning Representations},
     year={2022},
     url={https://openreview.net/forum?id=Czsdv-S4-w9}
}

@inproceedings{skorokhodov2021adversarial,
     title={Adversarial generation of continuous images},
     author={Skorokhodov, Ivan and Ignatyev, Savva and Elhoseiny, Mohamed},
     booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
     pages={10753--10764},
     year={2021}
}