SCGAN : Saliency Map-guided Colorization with Generative Adversarial Network

Research output: Journal Publications and Reviews (RGC: 21, 22, 62) › Publication in refereed journal › peer-review

14 Scopus Citations

Original language: English
Pages (from-to): 3062-3077
Journal / Publication: IEEE Transactions on Circuits and Systems for Video Technology
Issue number: 8
Online published: 16 Nov 2020
Publication status: Published - Aug 2021


Given a grayscale photograph, a colorization system estimates a visually plausible colorful image. Conventional methods often use semantics to colorize grayscale images. However, these methods embed only classification semantic information, resulting in semantic confusion and color bleeding in the final colorized image. To address these issues, we propose a fully automatic Saliency Map-guided Colorization with Generative Adversarial Network (SCGAN) framework. It jointly predicts the colorization and the saliency map to minimize semantic confusion and color bleeding in the colorized image. Because global features from a pre-trained VGG-16-Gray network are embedded into the colorization encoder, the proposed SCGAN can be trained with much less data than state-of-the-art methods while still achieving perceptually reasonable colorization. In addition, we propose a novel saliency map-based guidance method: branches of the colorization decoder are used to predict the saliency map as a proxy target. Moreover, two hierarchical discriminators are applied to the generated colorization and saliency map, respectively, to strengthen perceptual quality. The proposed system is evaluated on the ImageNet validation set. Experimental results show that SCGAN generates more plausible colorized images than state-of-the-art techniques.
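The joint-prediction idea in the abstract — one shared encoder feeding a colorization head and a saliency head, trained with a weighted multi-task loss — can be sketched schematically. This is not the authors' implementation: the layer sizes, random weights, and the loss weight `lam` are illustrative assumptions, and the real SCGAN uses convolutional networks plus adversarial losses omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(gray):
    # Shared encoder (toy stand-in): grayscale patch -> feature vector.
    W = rng.standard_normal((gray.size, 16)) * 0.1
    return np.tanh(gray.ravel() @ W)

def color_head(feat, out_pixels):
    # Decoder head 1: predicts two chrominance channels per pixel.
    W = rng.standard_normal((feat.size, out_pixels * 2)) * 0.1
    return (feat @ W).reshape(out_pixels, 2)

def saliency_head(feat, out_pixels):
    # Decoder head 2: predicts a saliency value in [0, 1] per pixel,
    # acting as the proxy target described in the abstract.
    W = rng.standard_normal((feat.size, out_pixels)) * 0.1
    return 1.0 / (1.0 + np.exp(-(feat @ W)))

def joint_loss(color, color_gt, sal, sal_gt, lam=0.25):
    # Multi-task objective: colorization loss plus a weighted saliency
    # loss (adversarial terms from the two discriminators are omitted).
    return np.mean((color - color_gt) ** 2) + lam * np.mean((sal - sal_gt) ** 2)

gray = rng.standard_normal((8, 8))       # toy 8x8 grayscale patch
feat = encode(gray)
color = color_head(feat, gray.size)      # shape (64, 2)
sal = saliency_head(feat, gray.size)     # shape (64,), values in [0, 1]
loss = joint_loss(color, rng.standard_normal(color.shape),
                  sal, rng.random(sal.shape))
```

The point of the sketch is the shared-feature design: both heads read the same encoding, so supervising saliency also shapes the features the colorization head consumes.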

Research Area(s)

  • Colorization, Generative Adversarial Networks, Gray-scale, Hemorrhaging, Image color analysis, Saliency Map, Semantics, Task analysis, Training