Learning Theory: From Regression to Classification

Qiang Wu, Yiming Ying, Ding-Xuan Zhou

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

5 Citations (Scopus)

Abstract

We give a brief survey of regularization schemes in learning theory for the purposes of regression and classification, from an approximation theory point of view. First, the classical method of empirical risk minimization is reviewed for regression with a general convex loss function. Next, we explain ideas and methods for the error analysis of regression algorithms generated by Tikhonov regularization schemes associated with reproducing kernel Hilbert spaces. Then binary classification algorithms given by regularization schemes are described with emphasis on support vector machines and noise conditions for distributions. Finally, we mention further topics and some open problems in learning theory. © 2006 Elsevier B.V. All rights reserved.
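The Tikhonov regularization scheme in a reproducing kernel Hilbert space that the survey analyzes can be illustrated by kernel ridge regression. The sketch below is not taken from the paper; the Gaussian kernel, the width `sigma`, and the regularization parameter `lam` are illustrative assumptions, and the coefficient formula follows from the representer theorem for regularized least squares.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=0.2):
    # K[i, j] = exp(-||x_i - y_j||^2 / (2 * sigma^2))
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def kernel_ridge_fit(X, y, lam=1e-4, sigma=0.2):
    # Tikhonov-regularized least squares in the RKHS: by the representer
    # theorem the minimizer is f(x) = sum_i alpha_i K(x, x_i), where
    # alpha solves (K + lam * n * I) alpha = y.
    n = len(X)
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + lam * n * np.eye(n), y)

def kernel_ridge_predict(X_train, alpha, X_new, sigma=0.2):
    return gaussian_kernel(X_new, X_train, sigma) @ alpha

# Usage: regress a noisy sine on [0, 1].
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, (50, 1))
y = np.sin(2 * np.pi * X[:, 0]) + 0.1 * rng.normal(size=50)
alpha = kernel_ridge_fit(X, y)
pred = kernel_ridge_predict(X, alpha, X)
```

The regularization parameter `lam` trades data fit against the RKHS norm of the estimator; the error analysis surveyed in the paper studies how to balance the resulting approximation and sample errors.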
Original language: English
Pages (from-to): 257-290
Journal: Studies in Computational Mathematics
Volume: 12
Issue number: C
DOIs
Publication status: Published - 2006

Research Keywords

  • 2000 MSC 68T05
  • 62J02
  • classification
  • error analysis
  • Learning theory
  • reproducing kernel Hilbert space
  • regression
  • regularization scheme
