Learning Theory: From Regression to Classification

Research output: Journal Publications and Reviews (RGC: 21, 22, 62) · Publication in refereed journal · Peer-reviewed

4 Scopus Citations

Author(s)

  • Qiang Wu
  • Yiming Ying
  • Ding-Xuan Zhou

Detail(s)

Original language: English
Pages (from-to): 257-290
Journal / Publication: Studies in Computational Mathematics
Volume: 12
Issue number: C
Publication status: Published - 2006

Abstract

We give a brief survey of regularization schemes in learning theory for the purposes of regression and classification, from an approximation theory point of view. First, the classical method of empirical risk minimization is reviewed for regression with a general convex loss function. Next, we explain ideas and methods for the error analysis of regression algorithms generated by Tikhonov regularization schemes associated with reproducing kernel Hilbert spaces. Then binary classification algorithms given by regularization schemes are described with emphasis on support vector machines and noise conditions for distributions. Finally, we mention further topics and some open problems in learning theory.
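A minimal sketch of the central scheme the abstract refers to, in generic notation (the symbols below are illustrative and not necessarily those used in the paper): given a sample z = {(x_i, y_i)}_{i=1}^m, a convex loss V, and a regularization parameter λ > 0, the Tikhonov regularization scheme over a reproducing kernel Hilbert space H_K selects

\[
f_{\mathbf{z},\lambda} \;=\; \operatorname*{arg\,min}_{f \in \mathcal{H}_K} \; \frac{1}{m} \sum_{i=1}^{m} V\bigl(y_i, f(x_i)\bigr) \;+\; \lambda \,\| f \|_K^2 .
\]

Taking V(y, t) = (y - t)^2 gives regularized least squares for regression; taking the hinge loss V(y, t) = max(0, 1 - yt) with y ∈ {-1, +1} gives the soft-margin support vector machine for binary classification, which is the path from regression to classification that the survey's title describes.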

Research Area(s)

  • 2000 MSC 68T05, 62J02, classification, error analysis, learning theory, reproducing kernel Hilbert space, regression, regularization scheme

Citation Format(s)

Learning Theory: From Regression to Classification. / Wu, Qiang; Ying, Yiming; Zhou, Ding-Xuan.
In: Studies in Computational Mathematics, Vol. 12, No. C, 2006, p. 257-290.
