Loss Functions of Generative Adversarial Networks (GANs): Opportunities and Challenges

Zhaoqing Pan, Weijie Yu, Bosi Wang, Haoran Xie, Victor S. Sheng, Jianjun Lei*, Sam Kwong

*Corresponding author for this work

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

79 Citations (Scopus)

Abstract

Generative Adversarial Networks (GANs) have recently become a key research direction in computational intelligence. To improve the modeling ability of GANs, loss functions are used to measure the differences between samples generated by the model and real samples, and to guide the model toward its learning goal. In this paper, we survey the loss functions used in GANs and analyze their pros and cons. First, the basic theory of GANs and their training mechanism are introduced. Then, the loss functions used in GANs are summarized, covering not only the objective functions of GANs but also application-oriented GAN loss functions. Third, experiments and analyses of representative loss functions are discussed. Finally, several suggestions are given on how to choose an appropriate loss function for a specific task.
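To make the abstract's notion of a GAN loss concrete, the following is a minimal sketch of the original minimax objective from Goodfellow et al.'s GAN formulation, which the survey takes as its starting point: the discriminator loss −[log D(x) + log(1 − D(G(z)))] and the common non-saturating generator loss −log D(G(z)). The scalar probabilities used below are toy values for illustration, not from the paper's experiments.

```python
import math

def discriminator_loss(d_real, d_fake):
    """Original GAN discriminator loss: -[log D(x) + log(1 - D(G(z)))].

    d_real: discriminator's probability that a real sample is real.
    d_fake: discriminator's probability that a generated sample is real.
    """
    return -(math.log(d_real) + math.log(1.0 - d_fake))

def generator_loss_nonsaturating(d_fake):
    """Non-saturating generator loss: -log D(G(z)).

    Gives stronger gradients early in training than the saturating
    form log(1 - D(G(z))) that appears in the minimax objective.
    """
    return -math.log(d_fake)

# Toy values: the discriminator assigns 0.9 to a real sample
# and 0.2 to a generated one.
ld = discriminator_loss(0.9, 0.2)
lg = generator_loss_nonsaturating(0.2)
print(round(ld, 4), round(lg, 4))  # → 0.3285 1.6094
```

In practice both losses are averaged over minibatches and minimized alternately: one step updates the discriminator's parameters, the next updates the generator's.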
Original language: English
Pages (from-to): 500-522
Journal: IEEE Transactions on Emerging Topics in Computational Intelligence
Volume: 4
Issue number: 4
Online published: 21 May 2020
DOIs
Publication status: Published - Aug 2020

Research Keywords

  • computational intelligence
  • Computational modeling
  • deep learning
  • Gallium nitride
  • Generative adversarial networks
  • generative adversarial networks (GANs)
  • Generators
  • Linear programming
  • Loss functions
  • machine learning
  • Task analysis
  • Training
