Learning Theory: From Regression to Classification
Research output: Journal Publications and Reviews (RGC: 21, 22, 62) › Publication in refereed journal › peer-review
Author(s)
Wu, Qiang; Ying, Yiming; Zhou, Ding-Xuan
Detail(s)
| Original language | English |
| --- | --- |
| Pages (from-to) | 257-290 |
| Journal / Publication | Studies in Computational Mathematics |
| Volume | 12 |
| Issue number | C |
| Publication status | Published - 2006 |
Abstract
We give a brief survey of regularization schemes in learning theory for the purposes of regression and classification, from an approximation theory point of view. First, the classical method of empirical risk minimization is reviewed for regression with a general convex loss function. Next, we explain ideas and methods for the error analysis of regression algorithms generated by Tikhonov regularization schemes associated with reproducing kernel Hilbert spaces. Then binary classification algorithms given by regularization schemes are described with emphasis on support vector machines and noise conditions for distributions. Finally, we mention further topics and some open problems in learning theory. © 2006 Elsevier B.V. All rights reserved.
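Since the abstract centers on Tikhonov regularization schemes associated with reproducing kernel Hilbert spaces, a standard form of such a scheme may help orient the reader. The notation below (the sample z, loss V, regularization parameter λ, and output function f_z) is our own illustration of the standard setup, not quoted from the paper:

```latex
% Tikhonov regularization over an RKHS $\mathcal{H}_K$ (standard form):
% given a sample $\mathbf{z} = \{(x_i, y_i)\}_{i=1}^{m}$, a convex loss
% function $V$, and a regularization parameter $\lambda > 0$, the scheme
% outputs
\[
  f_{\mathbf{z}}
  = \operatorname*{arg\,min}_{f \in \mathcal{H}_K}
    \frac{1}{m} \sum_{i=1}^{m} V\bigl(y_i, f(x_i)\bigr)
    + \lambda \,\lVert f \rVert_K^2 .
\]
% Taking the hinge loss $V(y, t) = \max\{0,\, 1 - y t\}$ recovers the
% soft-margin support vector machine for binary classification.
```

Empirical risk minimization, reviewed first in the survey, corresponds to minimizing the empirical error term alone over a fixed hypothesis space, without the penalty term.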
Research Area(s)
- 2000 MSC 68T05, 62J02, classification, error analysis, learning theory, reproducing kernel Hilbert space, regression, regularization scheme
Citation Format(s)
Learning Theory: From Regression to Classification. / Wu, Qiang; Ying, Yiming; Zhou, Ding-Xuan.
In: Studies in Computational Mathematics, Vol. 12, No. C, 2006, p. 257-290.