Characterization of training errors in supervised learning using gradient-based rules

Research output: Journal Publications and Reviews (RGC: 21, 22, 62) › Publication in refereed journal

8 Scopus Citations

Author(s)

Wang, Jun; Malakooti, B.

Detail(s)

Original language: English
Pages (from-to): 1073-1087
Journal / Publication: Neural Networks
Volume: 6
Issue number: 8
Publication status: Published - 1993
Externally published: Yes

Abstract

In the majority of existing supervised paradigms, a neural network is trained by minimizing an error function using a learning rule. The commonly used learning rules are gradient-based, such as the popular backpropagation algorithm. This paper addresses an important issue in error minimization for supervised learning of neural networks with gradient-based learning rules. It characterizes the asymptotic properties of training errors for various forms of neural networks in supervised learning and discusses the practical implications for neural network design through remarks and examples.
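The abstract's setting, training by minimizing an error function with a gradient-based rule, can be illustrated with a minimal sketch. This is not the paper's formulation; the single linear unit, the sum-of-squares error, and the learning rate below are illustrative assumptions only.

```python
# Minimal sketch (illustrative, not the paper's method): gradient descent
# on a sum-of-squares training error for a single linear unit.

def train(samples, eta=0.1, epochs=100):
    """Minimize E(w) = 0.5 * sum_p (t_p - w * x_p)^2 with the
    gradient-based rule  w <- w - eta * dE/dw."""
    w = 0.0
    for _ in range(epochs):
        grad = sum(-(t - w * x) * x for x, t in samples)  # dE/dw
        w -= eta * grad
    return w

# Hypothetical data with targets t = 2*x, so the error minimum is at w = 2.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w_final = train(data)
print(round(w_final, 4))  # converges toward 2.0
```

For this quadratic error the iterates contract toward the minimizer whenever the learning rate is small enough (here eta < 2 / sum(x_p^2)), which is the kind of asymptotic training-error behaviour the paper characterizes in general.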

Citation Format(s)

Characterization of training errors in supervised learning using gradient-based rules. / Wang, Jun; Malakooti, B.

In: Neural Networks, Vol. 6, No. 8, 1993, p. 1073-1087.
