Semantic Regularized Class-Conditional GANs for Semi-Supervised Fine-Grained Image Synthesis

Research output: Journal Publications and Reviews (RGC: 21, 22, 62) · Publication in refereed journal · peer-review




Original language: English
Number of pages: 12
Journal / Publication: IEEE Transactions on Multimedia
Online published: 28 Jun 2021
Publication status: Online published - 28 Jun 2021


Abstract

Learning effective generative models for natural image synthesis is a promising way to reduce the dependence of deep models on massive training data. This work focuses on Fine-Grained Image Synthesis (FGIS) in the semi-supervised setting, where only a small number of training instances are labeled. Unlike generic image synthesis tasks, the available fine-grained data may be inadequate, and the differences among object categories are typically subtle. To address these issues, we propose a Semantic Regularized class-conditional Generative Adversarial Network, referred to as SReGAN. We incorporate an additional discriminator and a classifier into the generator-discriminator minimax game. Competing with two discriminators forces the generator to model both the marginal and the class-conditional data distributions, which alleviates the problem of limited training data and labels. However, the discriminators may overlook class separability. To induce the generator to discover the distinctions between classes, we construct semantically congruent and incongruent pairs during generation, and further regularize the generator by encouraging high similarities for congruent pairs while penalizing those of incongruent pairs in the classifier's feature space. We have conducted extensive experiments verifying the capability of SReGAN to generate high-fidelity images on a variety of FGIS benchmarks.
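The abstract does not give the exact form of the semantic regularization term, but the described idea — rewarding similarity of semantically congruent pairs and penalizing similarity of incongruent pairs in a feature space — resembles a contrastive-style loss. Below is a minimal NumPy sketch under that assumption; the cosine similarity, the hinge margin, and all function names are illustrative choices, not the paper's actual formulation:

```python
import numpy as np

def cosine_sim(a, b):
    # Row-wise cosine similarity between two batches of feature vectors.
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return np.sum(a * b, axis=1)

def semantic_regularizer(f_cong_a, f_cong_b, f_incong_a, f_incong_b, margin=0.2):
    """Sketch of a semantic regularizer on classifier features:
    pull congruent pairs together (similarity -> 1) and push
    incongruent pairs apart via a hinge with a margin.
    The margin value is an assumed hyperparameter."""
    sim_pos = cosine_sim(f_cong_a, f_cong_b)      # congruent pairs
    sim_neg = cosine_sim(f_incong_a, f_incong_b)  # incongruent pairs
    return np.mean(1.0 - sim_pos) + np.mean(np.maximum(0.0, sim_neg - margin))
```

In a full training loop, this term would be added to the generator's adversarial objective with a weighting coefficient, using features extracted by the auxiliary classifier.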

Research Area(s)

  • Semi-supervised learning, fine-grained image synthesis, generative adversarial networks, semantic regularization