Abstract
Generative adversarial networks (GANs) are known to achieve state-of-the-art performance on various generative tasks, but these results come at the expense of a notoriously difficult training phase. Current training strategies typically draw a connection to optimization theory, whose scope is restricted to local convergence due to the presence of non-convexity. In this work, we tackle the training of GANs by rethinking the problem formulation from the mixed Nash Equilibria (NE) perspective. Via a classical lifting trick, we show that essentially all existing GAN objectives can be relaxed into their mixed strategy forms, whose global optima can be solved via sampling, in contrast to the exclusive use of the optimization framework in previous work. We further propose a mean-approximation sampling scheme, which allows us to systematically exploit methods for bi-affine games to delineate novel, practical training algorithms for GANs. Finally, we provide experimental evidence that our approach yields comparable or superior results to contemporary training algorithms, and outperforms classical optimizers such as SGD, Adam, and RMSProp.
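To make the lifting concrete (the notation below is ours, not quoted from the paper): instead of seeking a pure-strategy saddle point of a GAN objective $f$ over generator parameters $\theta$ and discriminator parameters $\varphi$, one optimizes over probability measures on the parameter spaces,

$$
\min_{\theta \in \Theta} \max_{\varphi \in \Phi} f(\theta, \varphi)
\;\;\longrightarrow\;\;
\min_{\mu \in \mathcal{P}(\Theta)} \max_{\nu \in \mathcal{P}(\Phi)} \; \mathbb{E}_{\theta \sim \mu,\, \varphi \sim \nu}\!\left[ f(\theta, \varphi) \right].
$$

The lifted objective is affine in $\mu$ for fixed $\nu$ and vice versa, i.e. a bi-affine game, which is why solvers for such games become applicable and why mixed strategies can be handled by sampling. As a minimal, self-contained sketch of this viewpoint (a toy construction of our own, not the paper's algorithm; all names and parameter values are assumptions), the snippet below approximates each player's mixed strategy by a cloud of particles on a bilinear toy game and runs extragradient-style updates with Langevin-type noise:

```python
# Toy sketch (our illustration, not the paper's code): a bilinear game
# min_x max_y x^T A y, with each player's mixed strategy approximated by
# a finite particle cloud. Particles follow extragradient updates
# (a standard solver for bi-affine games) plus Langevin-type noise.
import numpy as np

rng = np.random.default_rng(0)
n_particles, steps = 64, 2000
eta, noise = 0.05, 0.01                  # step size and noise scale (assumed)
A = np.array([[0.0, 1.0], [-1.0, 0.0]])  # payoff matrix of the toy game

X = rng.standard_normal((n_particles, 2))  # min-player particles
Y = rng.standard_normal((n_particles, 2))  # max-player particles

for _ in range(steps):
    # Half step with the current gradients ...
    Xh = X - eta * (Y @ A.T)   # grad_x of x^T A y is A y
    Yh = Y + eta * (X @ A)     # grad_y of x^T A y is A^T x
    # ... then a full step using the gradients at the half-step point.
    X = X - eta * (Yh @ A.T) + noise * rng.standard_normal(X.shape)
    Y = Y + eta * (Xh @ A) + noise * rng.standard_normal(Y.shape)

# The saddle point of this toy game sits at the origin, so the
# particle-cloud means should end up close to zero.
print("mean x:", X.mean(axis=0), "mean y:", Y.mean(axis=0))
```

Plain gradient descent-ascent cycles outward on bilinear games, which is why the sketch uses an extragradient step; the particle means then serve as the mean approximation of the mixed strategies.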
| Original language | English |
|---|---|
| Title of host publication | 36th International Conference on Machine Learning (ICML 2019) |
| Editors | Kamalika Chaudhuri, Ruslan Salakhutdinov |
| Publisher | International Machine Learning Society (IMLS) |
| Pages | 4972-5000 |
| ISBN (Print) | 9781510886988 |
| Publication status | Published - Jun 2019 |
| Externally published | Yes |
| Event | 36th International Conference on Machine Learning (ICML 2019) - Long Beach, United States |
| Duration | 9 Jun 2019 → 15 Jun 2019 |
| Internet address | https://icml.cc/ |
Publication series
| Name | Proceedings of Machine Learning Research |
|---|---|
| Volume | 97 |
| ISSN (Print) | 2640-3498 |
Conference
| Conference | 36th International Conference on Machine Learning (ICML 2019) |
|---|---|
| Place | United States |
| City | Long Beach |
| Period | 9/06/19 → 15/06/19 |
| Internet address | https://icml.cc/ |