EviD-GAN: Improving GAN With an Infinite Set of Discriminators at Negligible Cost

Aurele Tohokantche Gnanha, Wenming Cao, Xudong Mao, Si Wu, Hau-San Wong*, Qing Li

*Corresponding author for this work

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

Abstract

Ensemble learning improves the capability of convolutional neural network (CNN)-based discriminators, whose performance is crucial to the quality of the samples generated by a generative adversarial network (GAN). However, this learning strategy significantly increases both the number of parameters and the computational overhead. Meanwhile, the number of discriminators needed to enhance GAN performance remains an open question. To mitigate these issues, we propose an evidential discriminator for GAN (EviD-GAN; code is available at https://github.com/Tohokantche/EviD-GAN) that learns both the model (epistemic) and data (aleatoric) uncertainties. Specifically, by analyzing three GAN models, we uncover the relation between the distribution of the discriminator's output and the generator's performance, which yields a general formulation of the GAN framework. Building on this analysis, the evidential discriminator learns the degree of aleatoric and epistemic uncertainty by imposing a higher-order distribution over the likelihood expressed in the discriminator's output. This constraint amounts to learning an ensemble of likelihood functions corresponding to an infinite set of discriminators. EviD-GAN thus aggregates knowledge through ensemble learning of the discriminator, allowing the generator to benefit from an informative gradient flow at negligible computational cost. Furthermore, inspired by the gradient direction in maximum mean discrepancy (MMD)-repulsive GAN, we design an asymmetric regularization scheme for EviD-GAN. Unlike MMD-repulsive GAN, which operates at the distribution level, our regularization scheme is based on a pairwise loss function, operates at the sample level, and behaves asymmetrically during the training of the generator and the discriminator.
Experimental results show that the proposed evidential discriminator is cost-effective, consistently improves GAN in terms of Fréchet inception distance (FID) and inception score (IS), and outperforms competing models that use multiple discriminators. © 2024 IEEE.
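The core idea above can be illustrated with a minimal sketch: if the discriminator outputs the parameters of a higher-order (here, Beta) distribution over its Bernoulli "real vs. fake" likelihood, then the Beta mean plays the role of the averaged verdict of an implicit infinite ensemble of discriminators, and the Beta variance quantifies their disagreement (epistemic uncertainty). All function names, shapes, and the choice of a Beta distribution are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def evidential_head(logits):
    """Hypothetical head: map two raw discriminator outputs per sample to
    Beta parameters (alpha, beta) > 1 via a softplus evidence transform."""
    evidence = np.log1p(np.exp(logits))  # softplus -> non-negative evidence
    alpha = evidence[..., 0] + 1.0
    beta = evidence[..., 1] + 1.0
    return alpha, beta

def expected_realness(alpha, beta):
    """Beta mean: the average 'real' probability of the implicit ensemble."""
    return alpha / (alpha + beta)

def epistemic_uncertainty(alpha, beta):
    """Beta variance: disagreement among the implicit discriminators."""
    s = alpha + beta
    return (alpha * beta) / (s ** 2 * (s + 1.0))

# Two samples: one with strong 'real' evidence, one with weak evidence.
logits = np.array([[2.0, -1.0], [0.1, 0.1]])
a, b = evidential_head(logits)
p = expected_realness(a, b)       # ensemble-averaged verdict per sample
u = epistemic_uncertainty(a, b)   # higher where evidence is weaker
```

Under this sketch, the low-evidence second sample yields a higher epistemic uncertainty than the first, which is the signal the generator could exploit as a more informative gradient, without ever instantiating multiple discriminator networks.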
Original language: English
Pages (from-to): 6422-6436
Number of pages: 15
Journal: IEEE Transactions on Neural Networks and Learning Systems
Volume: 36
Issue number: 4
Online published: 19 Aug 2024
DOIs
Publication status: Published - Apr 2025

Funding

This work was supported in part by the Research Grants Council of the Hong Kong Special Administrative Region under Project CityU 11206622; in part by the Hong Kong Research Grants Council General Research Fund under Project 15200023; in part by the National Natural Science Foundation of China under Project 62176223, Project 62302535, Project 62306052, and Project 62072189; in part by the Guangdong Basic and Applied Basic Research Foundation under Project 2022A1515011160; and in part by the TCL Science and Technology Innovation Fund under Project 20231752.

Research Keywords

  • Deep learning
  • evidential learning
  • generative adversarial networks (GANs)
  • generative modeling
