Convergence of the generalized back-propagation algorithm with constant learning rates

S. C. Ng, S. H. Leung, A. Luk

Research output: Chapters, Conference Papers, Creative and Literary Works › RGC 32 - Refereed conference paper (with host publication) › peer-review

5 Citations (Scopus)

Abstract

A new generalized back-propagation algorithm that can effectively speed up the convergence rate and reduce the chance of being trapped in local minima was recently introduced in [1]. In this paper, we analyze the convergence of the generalized back-propagation algorithm. The weight sequences produced by the generalized back-propagation algorithm can be approximated by a certain ordinary differential equation (ODE). As the learning rate tends to zero, the interpolated weight sequences of generalized back-propagation converge weakly to the solution of the associated ODE.
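For intuition, the following is a minimal sketch of the kind of constant-learning-rate analysis the abstract describes, written in generic stochastic-approximation notation. The specific generalized back-propagation update of [1] is not reproduced in the abstract, so the update function h, the samples x_k, and the mean field \bar{h} below are placeholders rather than the authors' exact formulation.

% Hedged sketch: generic constant-learning-rate weight update, its piecewise-linear
% interpolation, and the associated mean ODE. All symbols here (h, x_k, \bar{h})
% are placeholders, not the specific generalized back-propagation rule of [1].
\begin{align}
  w_{k+1} &= w_k + \eta\, h(w_k, x_k), && \eta > 0 \text{ a constant learning rate},\\
  w^{\eta}(t) &= w_k + \frac{t - k\eta}{\eta}\,\bigl(w_{k+1} - w_k\bigr), && t \in [\,k\eta,\, (k+1)\eta\,),\\
  \dot{w}(t) &= \bar{h}\bigl(w(t)\bigr), && \bar{h}(w) = \mathbb{E}\bigl[h(w, x)\bigr].
\end{align}

The convergence statement of the abstract is of the form: the interpolated processes w^{\eta}(\cdot) converge weakly to the solution w(\cdot) of the associated ODE as \eta \to 0.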
Original language: English
Title of host publication: The 1998 IEEE International Joint Conference on Neural Networks Proceedings
Subtitle of host publication: IEEE World Congress on Computational Intelligence
Publisher: IEEE
Pages: 1090-1094
Volume: 2
ISBN (Print): 0-7803-4859-1
DOIs
Publication status: Published - May 1998
Event: 1998 IEEE International Joint Conference on Neural Networks (IJCNN 1998) - Anchorage, AK, USA
Duration: 4 May 1998 - 9 May 1998

Publication series

ISSN (Print): 1098-7576

Conference

Conference: 1998 IEEE International Joint Conference on Neural Networks (IJCNN 1998)
City: Anchorage, AK, USA
Period: 4/05/98 - 9/05/98
