Generalized back-propagation algorithm for faster convergence

S. C. Ng, S. H. Leung, A. Luk

Research output: Chapters, Conference Papers, Creative and Literary Works; RGC 32 - Refereed conference paper (with host publication); peer-reviewed

6 Citations (Scopus)

Abstract

The conventional back-propagation algorithm is essentially a gradient-descent method, and it therefore suffers from slow convergence and the risk of being trapped in local minima. This paper introduces a new generalized back-propagation algorithm that effectively speeds up convergence and reduces the chance of entrapment in local minima. The idea is to modify the derivative of the activation function so as to magnify the backward-propagated error signal; the magnified error signal accelerates convergence and helps the weights escape local minima.
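
The abstract describes the mechanism only at a high level: during the backward pass, the true derivative of the activation function is replaced by a magnified surrogate so that the error signal does not vanish when units saturate (for a sigmoid unit with output y, the true derivative y(1 - y) approaches zero as y nears 0 or 1, which stalls learning). The exact functional form used in the paper is not given in the abstract; the sketch below assumes a simple additive offset to the sigmoid derivative, and the names magnified_deriv and lam, the offset value, and the XOR training setup are illustrative assumptions rather than details from the paper.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def magnified_deriv(y, lam=0.1):
    # True sigmoid derivative y*(1-y) plus a constant offset (an assumed
    # form of "magnification"): the offset keeps the backward error signal
    # non-zero even when the unit output y saturates near 0 or 1.
    return y * (1.0 - y) + lam

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1 = rng.normal(0.0, 1.0, (2, 4)); b1 = np.zeros(4)  # 2-4-1 network
W2 = rng.normal(0.0, 1.0, (4, 1)); b2 = np.zeros(1)
eta = 0.5  # learning rate

for epoch in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)
    # Backward pass: the magnified surrogate replaces the true derivative
    # wherever standard back-propagation would use y*(1-y).
    delta2 = (y - T) * magnified_deriv(y)
    delta1 = (delta2 @ W2.T) * magnified_deriv(h)
    W2 -= eta * (h.T @ delta2); b2 -= eta * delta2.sum(axis=0)
    W1 -= eta * (X.T @ delta1); b1 -= eta * delta1.sum(axis=0)

print(np.round(y.ravel(), 3))  # should approach [0, 1, 1, 0]

Setting lam = 0 recovers ordinary back-propagation, which makes the sketch a convenient baseline for comparing convergence behaviour in the spirit of the paper's claim.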
Original language: English
Title of host publication: IEEE International Conference on Neural Networks - Conference Proceedings
Publisher: IEEE
Pages: 409-413
Volume: 1
Publication status: Published - 1996
Event: Proceedings of the 1996 IEEE International Conference on Neural Networks, ICNN. Part 1 (of 4) - Washington, DC, USA
Duration: 3 Jun 1996 - 6 Jun 1996

Publication series

Volume: 1

Conference

Conference: Proceedings of the 1996 IEEE International Conference on Neural Networks, ICNN. Part 1 (of 4)
City: Washington, DC, USA
Period: 3/06/96 - 6/06/96
