Learning with varying insensitive loss

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review


Author(s)

  • Dao-Hong Xiang
  • Ting Hu
  • Ding-Xuan Zhou

Detail(s)

Original language: English
Pages (from-to): 2107-2109
Journal / Publication: Applied Mathematics Letters
Volume: 24
Issue number: 12
Publication status: Published - Dec 2011

Abstract

Support vector machines for regression are implemented based on regularization schemes in reproducing kernel Hilbert spaces associated with an ε-insensitive loss. The insensitive parameter ε > 0 changes with the sample size and plays a crucial role in the learning algorithm. The purpose of this paper is to present a perturbation theorem showing how the median function of the probability measure for regression (corresponding to ε = 0) can be approximated by the minimizer of the generalization error with a sufficiently small parameter ε > 0. A concrete learning rate is provided under a regularity condition on the median function and a noise condition on the probability measure. © 2011 Elsevier Ltd. All rights reserved.
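The key fact the abstract relies on can be illustrated numerically: the ε-insensitive loss L_ε(t) = max(|t| − ε, 0) reduces to the absolute-value loss at ε = 0, whose empirical minimizer over constant predictors is the sample median, and for small ε > 0 the minimizer stays close to the median. The following stdlib-only Python sketch (not code from the paper; the sample and grid search are illustrative choices) checks this on a skewed sample:

```python
# Sketch: the epsilon-insensitive loss L_eps(t) = max(|t| - eps, 0).
# At eps = 0 it is the absolute-value loss, minimized over constants
# by the sample median; for small eps > 0 the minimizer stays nearby.
import statistics

def eps_insensitive(t, eps):
    """L_eps(t) = max(|t| - eps, 0)."""
    return max(abs(t) - eps, 0.0)

def empirical_risk(c, ys, eps):
    """Average eps-insensitive loss of the constant predictor c."""
    return sum(eps_insensitive(y - c, eps) for y in ys) / len(ys)

def minimize_over_constants(ys, eps, grid=2000):
    """Grid search for the risk-minimizing constant on [min(ys), max(ys)]."""
    lo, hi = min(ys), max(ys)
    cs = [lo + (hi - lo) * i / grid for i in range(grid + 1)]
    return min(cs, key=lambda c: empirical_risk(c, ys, eps))

ys = [0.1, 0.4, 0.5, 0.7, 3.0]   # skewed sample: mean (0.94) far from median
median = statistics.median(ys)    # 0.5
c_small_eps = minimize_over_constants(ys, eps=0.01)
print(median, c_small_eps)        # minimizer for small eps sits near the median
```

With ε > 0 the minimizer is in general a whole interval (here roughly [0.49, 0.51]), which is why the paper's perturbation theorem, rather than exact equality, governs how the minimizers converge to the median as ε → 0.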

Research Area(s)

  • ε-insensitive loss, Approximation, Regression, Reproducing kernel Hilbert space, Support vector machine

Citation Format(s)

Learning with varying insensitive loss. / Xiang, Dao-Hong; Hu, Ting; Zhou, Ding-Xuan.
In: Applied Mathematics Letters, Vol. 24, No. 12, 12.2011, p. 2107-2109.
