Fast Convergent Generalized Back-Propagation Algorithm with Constant Learning Rate

Research output: Journal Publications and Reviews (Publication in refereed journal)

27 Scopus Citations

Author(s)

Related Research Unit(s)

Detail(s)

Original language: English
Pages (from-to): 13-23
Journal / Publication: Neural Processing Letters
Volume: 9
Issue number: 1
Publication status: Published - 1999

Abstract

The conventional back-propagation algorithm is basically a gradient-descent method; it suffers from local minima and slow convergence. A new generalized back-propagation algorithm which can effectively speed up the convergence rate and reduce the chance of being trapped in local minima is introduced. The new algorithm changes the derivative of the activation function so as to magnify the backward-propagated error signal; in this way the convergence rate is accelerated and local minima can be escaped. In this letter, we also investigate the convergence of the generalized back-propagation algorithm with a constant learning rate. The weight sequences generated by the generalized back-propagation algorithm can be approximated by a certain ordinary differential equation (ODE). When the learning rate tends to zero, the interpolated weight sequences of generalized back-propagation converge weakly to the solution of the associated ODE.
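To make the idea concrete, the following is a minimal sketch of the kind of modification the abstract describes: a one-hidden-layer sigmoid network trained by gradient descent with a constant learning rate, where the activation derivative used in the backward pass is magnified. The specific form of the magnification (adding a constant offset `LAMBDA` to the sigmoid derivative), the learning rate `ETA`, the network sizes, and the XOR task are all illustrative assumptions, not the exact formulation of the paper.

```python
import numpy as np

# Sketch of a "generalized" back-propagation rule: the sigmoid derivative
# used in the backward pass is magnified by a constant offset LAMBDA.
# The exact modified derivative is an assumption for illustration only.

LAMBDA = 0.1      # assumed magnification offset for the derivative
ETA = 0.5         # constant learning rate

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gen_sigmoid_grad(y):
    # Standard derivative sigma'(x) = y * (1 - y), plus a constant offset
    # so the backward-propagated error signal is magnified and does not
    # vanish in flat regions of the sigmoid.
    return y * (1.0 - y) + LAMBDA

def train_xor(epochs=5000, seed=0):
    rng = np.random.default_rng(seed)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    T = np.array([[0], [1], [1], [0]], dtype=float)

    W1 = rng.normal(scale=0.5, size=(2, 4))   # input -> hidden weights
    W2 = rng.normal(scale=0.5, size=(4, 1))   # hidden -> output weights

    for _ in range(epochs):
        # Forward pass
        H = sigmoid(X @ W1)
        Y = sigmoid(H @ W2)

        # Backward pass using the magnified derivative
        delta_out = (Y - T) * gen_sigmoid_grad(Y)
        delta_hid = (delta_out @ W2.T) * gen_sigmoid_grad(H)

        # Gradient-descent update with constant learning rate ETA
        W2 -= ETA * H.T @ delta_out
        W1 -= ETA * X.T @ delta_hid

    return Y

if __name__ == "__main__":
    print(train_xor().round(3))   # outputs should approach [0, 1, 1, 0]
```

Setting `LAMBDA = 0` recovers standard back-propagation, so the sketch also illustrates the claimed effect: with a positive offset the error signal fed backward is never annihilated by a near-zero derivative, which is the mechanism the abstract credits for faster convergence and escape from local minima.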

Research Area(s)

  • Constant learning rate, Convergence, Feedforward neural networks, Generalized back-propagation, Gradient descent algorithm