Abstract
This paper presents the first end-to-end network for exemplar-based video colorization. The main challenge is to achieve temporal consistency while remaining faithful to the reference style. To address this issue, we introduce a recurrent framework that unifies the semantic correspondence and color propagation steps. Both steps allow a provided reference image to guide the colorization of every frame, thus reducing accumulated propagation errors. Video frames are colorized in sequence based on the colorization history, and their coherency is further enforced by a temporal consistency loss. All of these components, learned end-to-end, help produce realistic videos with good temporal stability. Experiments show that our results are superior to those of state-of-the-art methods both quantitatively and qualitatively.
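To make the recurrent scheme concrete, below is a minimal sketch of the per-frame loop the abstract describes: each frame is colorized conditioned on both the reference exemplar and the previously colorized frame (the "colorization history"), with a temporal consistency penalty on flow-warped neighbors. The module names, toy sub-networks, tensor shapes, and the flow-warp interface are illustrative assumptions, not the authors' released architecture.

```python
import torch
import torch.nn as nn

class CorrespondenceNet(nn.Module):
    """Toy stand-in for the semantic correspondence step: aligns the
    reference colors to the current grayscale frame (hypothetical)."""
    def __init__(self):
        super().__init__()
        self.fuse = nn.Conv2d(1 + 3, 2, kernel_size=3, padding=1)

    def forward(self, frame_l, ref_lab):
        # frame_l: (1, 1, H, W) luminance; ref_lab: (1, 3, H, W) reference.
        return self.fuse(torch.cat([frame_l, ref_lab], dim=1))  # aligned ab

class ColorizationNet(nn.Module):
    """Toy stand-in for the color propagation step: predicts ab channels
    from luminance, aligned reference colors, and the previous frame's ab."""
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(1 + 2 + 2, 2, kernel_size=3, padding=1)

    def forward(self, frame_l, aligned_ab, prev_ab):
        return self.net(torch.cat([frame_l, aligned_ab, prev_ab], dim=1))

def colorize_video(frames_l, ref_lab, correspondence, colorizer):
    """frames_l: (T, 1, 1, H, W) grayscale frames; ref_lab: (1, 3, H, W)."""
    prev_ab = torch.zeros(1, 2, *frames_l.shape[-2:])  # empty history at t=0
    outputs = []
    for frame_l in frames_l:  # sequential: every frame sees the history
        aligned_ab = correspondence(frame_l, ref_lab)
        prev_ab = colorizer(frame_l, aligned_ab, prev_ab)
        outputs.append(prev_ab)
    return torch.stack(outputs)

def temporal_consistency_loss(ab_t, warped_prev_ab, mask):
    """Penalize color change between a frame and the flow-warped previous
    frame on pixels where the flow is reliable (mask in {0, 1})."""
    return (mask * (ab_t - warped_prev_ab).abs()).mean()

# Usage on random data (T=4 frames of 64x64):
frames = torch.rand(4, 1, 1, 64, 64)
reference = torch.rand(1, 3, 64, 64)
ab_seq = colorize_video(frames, reference, CorrespondenceNet(), ColorizationNet())
print(ab_seq.shape)  # torch.Size([4, 1, 2, 64, 64])
```

Because the reference guides every frame directly (rather than only the first), errors do not compound purely through frame-to-frame propagation; the loss term above then ties consecutive outputs together.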
| Original language | English |
| --- | --- |
| Pages | 8044-8053 |
| DOIs | |
| Publication status | Published - Jun 2019 |
| Event | 32nd IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2019) - Long Beach, United States. Duration: 16 Jun 2019 → 20 Jun 2019. http://cvpr2019.thecvf.com/ |
Conference
| Conference | 32nd IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2019) |
| --- | --- |
| Country/Territory | United States |
| City | Long Beach |
| Period | 16/06/19 → 20/06/19 |
| Internet address | http://cvpr2019.thecvf.com/ |
Bibliographical note
Research Unit(s) information for this publication is provided by the author(s) concerned.

Research Keywords
- Computational Photography
- Deep Learning