EviD-GAN: Improving GAN With an Infinite Set of Discriminators at Negligible Cost
Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review
Author(s)
Related Research Unit(s)
Detail(s)
Original language | English |
---|---|
Number of pages | 15 |
Journal / Publication | IEEE Transactions on Neural Networks and Learning Systems |
Online published | 19 Aug 2024 |
Publication status | Online published - 19 Aug 2024 |
Link(s)
DOI | DOI |
---|---|
Permanent Link | https://scholars.cityu.edu.hk/en/publications/publication(5ef8b09d-22b9-44b5-9915-0b08d70ac64f).html |
Abstract
Ensemble learning improves the capability of convolutional neural network (CNN)-based discriminators, whose performance is crucial to the quality of samples generated by a generative adversarial network (GAN). However, this learning strategy substantially increases the number of parameters and the computational overhead, and the number of discriminators needed to enhance GAN performance remains an open question. To mitigate these issues, we propose an evidential discriminator for GAN (EviD-GAN; code is available at https://github.com/Tohokantche/EviD-GAN) that learns both the model (epistemic) and data (aleatoric) uncertainties. Specifically, by analyzing three GAN models, we uncover the relation between the distribution of the discriminator's output and the generator's performance, yielding a general formulation of the GAN framework. Building on this analysis, the evidential discriminator learns the degree of aleatoric and epistemic uncertainty by imposing a higher-order distribution over the likelihood expressed in the discriminator's output. This constraint learns an ensemble of likelihood functions corresponding to an infinite set of discriminators. EviD-GAN thus aggregates knowledge through ensemble learning in the discriminator, allowing the generator to benefit from an informative gradient flow at negligible computational cost. Furthermore, inspired by the gradient direction in maximum mean discrepancy (MMD)-repulsive GAN, we design an asymmetric regularization scheme for EviD-GAN. Unlike MMD-repulsive GAN, which operates at the distribution level, our regularization scheme is based on a pairwise loss function, operates at the sample level, and behaves asymmetrically during the training of the generator and the discriminator.
Experimental results show that the proposed evidential discriminator is cost-effective, consistently improves GAN in terms of Fréchet inception distance (FID) and inception score (IS), and performs better than other competing models that use multiple discriminators.
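To make the abstract's key idea concrete: in evidential deep learning, a network head outputs the parameters of a higher-order distribution over the likelihood, from which aleatoric and epistemic uncertainty follow in closed form. The sketch below illustrates this with the standard Normal-Inverse-Gamma (NIG) parameterization; the function names, the four-output head, and the softplus mapping are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softplus(x):
    """Numerically simple positivity transform: log(1 + e^x)."""
    return np.log1p(np.exp(x))

def evidential_head(raw):
    """Map 4 raw head outputs to Normal-Inverse-Gamma parameters
    (gamma, nu, alpha, beta). This is the common evidential-regression
    parameterization, used here only as an illustration."""
    gamma = raw[0]                  # predicted mean (the usual scalar critic output)
    nu    = softplus(raw[1])        # virtual observation count, > 0
    alpha = softplus(raw[2]) + 1.0  # shape, > 1 so the moments below exist
    beta  = softplus(raw[3])        # scale, > 0
    return gamma, nu, alpha, beta

def uncertainties(nu, alpha, beta):
    """Closed-form uncertainties of the NIG posterior:
    aleatoric = E[sigma^2], epistemic = Var[mu]."""
    aleatoric = beta / (alpha - 1.0)
    epistemic = beta / (nu * (alpha - 1.0))
    return aleatoric, epistemic

# Example: zero raw outputs give nu < 1 (little "evidence"),
# so epistemic uncertainty exceeds aleatoric uncertainty.
gamma, nu, alpha, beta = evidential_head(np.zeros(4))
ale, epi = uncertainties(nu, alpha, beta)
```

A single such head amounts to an ensemble of likelihood functions (one per draw from the NIG), which is how an "infinite set of discriminators" can be had at negligible extra cost.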
Research Area(s)
- Deep learning, evidential learning, generative adversarial networks (GANs), generative modeling
Citation Format(s)
EviD-GAN: Improving GAN With an Infinite Set of Discriminators at Negligible Cost. / Gnanha, Aurele Tohokantche; Cao, Wenming; Mao, Xudong et al.
In: IEEE Transactions on Neural Networks and Learning Systems, 19.08.2024.