TY - JOUR
T1 - SVCNet: Scribble-based Video Colorization Network with Temporal Aggregation
AU - Zhao, Yuzhi
AU - Po, Lai-Man
AU - Liu, Kangcheng
AU - Wang, Xuehui
AU - Yu, Wing-Yin
AU - Xian, Pengfei
AU - Zhang, Yujia
AU - Liu, Mengyang
PY - 2023
Y1 - 2023
AB - In this paper, we propose SVCNet, a scribble-based video colorization network with temporal aggregation. It colorizes monochrome videos based on different user-given color scribbles and addresses three common issues in scribble-based video colorization: colorization vividness, temporal consistency, and color bleeding. To improve colorization quality and strengthen temporal consistency, SVCNet adopts two sequential sub-networks for precise colorization and temporal smoothing, respectively. The first stage includes a pyramid feature encoder that incorporates color scribbles with a grayscale frame, and a semantic feature encoder that extracts semantics. The second stage refines the output of the first stage by aggregating information from neighboring colorized frames (as short-range connections) and the first colorized frame (as a long-range connection). To alleviate color bleeding artifacts, we learn video colorization and segmentation simultaneously. Furthermore, we perform the majority of operations at a fixed small image resolution and use a Super-resolution Module at the tail of SVCNet to recover the original size, which allows SVCNet to fit different image resolutions at inference. Finally, we evaluate the proposed SVCNet on the DAVIS and Videvo benchmarks. The experimental results demonstrate that SVCNet produces both higher-quality and more temporally consistent videos than other well-known video colorization approaches. The code and models can be found at https://github.com/zhaoyuzhi/SVCNet. © 2023 IEEE.
KW - Video colorization
KW - scribble-based colorization
KW - temporal aggregation
KW - segmentation
UR - https://www.scopus.com/record/pubmetrics.uri?eid=2-s2.0-85166467382&origin=recordpage
UR - http://www.scopus.com/inward/record.url?scp=85166467382&partnerID=8YFLogxK
DO - 10.1109/TIP.2023.3298537
M3 - RGC 21 - Publication in refereed journal
SN - 1057-7149
VL - 32
SP - 4443
EP - 4458
JO - IEEE Transactions on Image Processing
JF - IEEE Transactions on Image Processing
ER -