Abstract
Deep learning has attracted extensive attention due to its great empirical success. The efficiency of block coordinate descent (BCD) methods has recently been demonstrated in deep neural network (DNN) training. However, theoretical studies of their convergence properties are limited due to the highly nonconvex nature of DNN training. In this paper, we aim to provide a general methodology for establishing provable convergence guarantees for this class of methods. In particular, for most of the commonly used DNN training models involving both two- and three-splitting schemes, we establish global convergence to a critical point at a rate of O(1/k), where k is the number of iterations. The results extend to general loss functions with Lipschitz continuous gradients and to deep residual networks (ResNets). Our key development adds several new elements to the Kurdyka-Łojasiewicz inequality framework, enabling us to carry out the global convergence analysis of BCD in the general scenario of deep learning.
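The abstract refers to two- and three-splitting BCD schemes for DNN training but does not reproduce the algorithm itself. As a rough, hypothetical illustration only (not the paper's exact formulation), the NumPy sketch below performs cyclic gradient-type updates over weight and auxiliary-variable blocks of a quadratically penalized one-hidden-layer model; the block structure, penalty weight `gamma`, step size, and all dimensions are assumptions made for this sketch.

```python
import numpy as np

# Toy data and a one-hidden-layer model. Sizes, gamma, and the step size are
# illustrative assumptions, not values from the paper.
rng = np.random.default_rng(0)
n, d, h, m = 100, 10, 16, 1            # samples, input dim, hidden dim, output dim
X = rng.standard_normal((d, n))
Y = rng.standard_normal((m, n))

sigma = np.tanh                        # smooth activation
dsigma = lambda u: 1.0 - np.tanh(u) ** 2

W1 = 0.1 * rng.standard_normal((h, d))
W2 = 0.1 * rng.standard_normal((m, h))
U = W1 @ X                             # auxiliary block: pre-activations
V = sigma(U)                           # auxiliary block: activations

gamma, step, iters = 1.0, 1e-3, 1000   # penalty weight, step size, iteration count

def objective():
    """Penalized splitting objective: squared loss + quadratic coupling penalties."""
    fit = 0.5 * np.linalg.norm(W2 @ V - Y) ** 2
    pen = 0.5 * gamma * (np.linalg.norm(U - W1 @ X) ** 2
                         + np.linalg.norm(V - sigma(U)) ** 2)
    return fit + pen

print("initial objective:", objective())
for k in range(iters):
    # Block 1: output weights W2 (a gradient step; an exact least-squares
    # update would also be possible for this block).
    W2 -= step * (W2 @ V - Y) @ V.T
    # Block 2: activation variables V.
    V -= step * (W2.T @ (W2 @ V - Y) + gamma * (V - sigma(U)))
    # Block 3: pre-activation variables U.
    U -= step * (gamma * (U - W1 @ X) - gamma * (V - sigma(U)) * dsigma(U))
    # Block 4: input weights W1.
    W1 += step * gamma * (U - W1 @ X) @ X.T
print("final objective:", objective())
```

The point of introducing the auxiliary pre-activation and activation blocks in splitting-based formulations is to decouple the layers, so that each block subproblem becomes a simple (often smooth or even least-squares) update; this is what makes block-wise minimization tractable for DNN training.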
Original language | English
---|---
Title of host publication | 36th International Conference on Machine Learning (ICML 2019)
Publisher | International Machine Learning Society (IMLS)
Pages | 12685-12711
Volume | 19
ISBN (Print) | 9781510886988
Publication status | Published - Oct 2019
Event | 36th International Conference on Machine Learning (ICML 2019), Long Beach, United States, 9 Jun 2019 → 15 Jun 2019, https://icml.cc/
Publication series
Name | Proceedings of Machine Learning Research
---|---
Volume | 97
ISSN (Electronic) | 2640-3498
Conference
Conference | 36th International Conference on Machine Learning (ICML 2019)
---|---
Country/Territory | United States
City | Long Beach
Period | 9/06/19 → 15/06/19
Internet address | https://icml.cc/