Temporal Regularization Makes Your Video Generator Stronger

Harold Haodong Chen1,2 Haojian Huang1,4 Xianfeng Wu1,2 Yexin Liu1,2
Yajing Bai1,2 Wen-Jie Shu1,2 Harry Yang1,2 Ser-Nam Lim1,3
1Everlyn AI 2HKUST 3UCF 4HKU

[Paper]    


Abstract

Temporal quality is a critical aspect of video generation, as it ensures consistent motion and realistic dynamics across frames. However, achieving high temporal coherence and diversity remains challenging. In this work, we explore temporal augmentation in video generation for the first time and introduce FluxFlow, an initial strategy designed to enhance temporal quality. Operating at the data level, FluxFlow applies controlled temporal perturbations without requiring architectural modifications. Extensive experiments on the UCF-101 and VBench benchmarks demonstrate that FluxFlow significantly improves temporal coherence and diversity across various video generation models, including U-Net, DiT, and AR-based architectures, while preserving spatial fidelity. These findings highlight the potential of temporal augmentation as a simple yet effective approach to advancing video generation quality.
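The idea of a data-level temporal perturbation can be sketched as follows. This is a minimal illustration, not the paper's exact FluxFlow recipe: it assumes a simple frame-swap scheme (the `swap_frac` parameter and `temporal_perturb` helper are hypothetical names for illustration), applied to a clip's frame sequence before it is fed to the generator during training.

```python
import random

def temporal_perturb(frames, swap_frac=0.1, seed=None):
    """Randomly swap a small fraction of frame pairs in a clip.

    A hedged sketch of data-level temporal perturbation; the actual
    FluxFlow strategies in the paper may differ in detail.
    """
    rng = random.Random(seed)
    frames = list(frames)  # copy so the original clip is untouched
    n = len(frames)
    n_swaps = max(1, int(n * swap_frac))  # at least one swap
    for _ in range(n_swaps):
        i, j = rng.sample(range(n), 2)    # two distinct positions
        frames[i], frames[j] = frames[j], frames[i]
    return frames

# Usage: perturb a 16-frame clip (integers stand in for frame tensors).
clip = list(range(16))
perturbed = temporal_perturb(clip, swap_frac=0.1, seed=0)
```

Because the perturbation only reorders frames, spatial content is untouched; the model sees the same pixels but a mildly disrupted temporal order, which is what encourages it to learn more robust motion dynamics.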

Overview

Evaluations

CogVideoX-2B

NOVA

VideoCrafter2

BibTeX

@article{chen2025fluxflow,
    title={Temporal Regularization Makes Your Video Generator Stronger},
    author={Chen, Harold Haodong and Huang, Haojian and Wu, Xianfeng and Liu, Yexin and Bai, Yajing and Shu, Wen-Jie and Yang, Harry and Lim, Ser-Nam},
    journal={arXiv preprint arXiv:2503.15417},
    year={2025}
}

Project page template is borrowed from DreamBooth.