Learning with varying insensitive loss
Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review
Detail(s)
Original language | English |
---|---|
Pages (from-to) | 2107-2109 |
Journal / Publication | Applied Mathematics Letters |
Volume | 24 |
Issue number | 12 |
Publication status | Published - Dec 2011 |
Abstract
Support vector machines for regression are implemented based on regularization schemes in reproducing kernel Hilbert spaces associated with an ε-insensitive loss. The insensitive parameter ε > 0 changes with the sample size and plays a crucial role in the learning algorithm. The purpose of this paper is to present a perturbation theorem showing how the median function of the probability measure for regression (corresponding to ε = 0) can be approximated by learning the minimizer of the generalization error with a sufficiently small parameter ε > 0. A concrete learning rate is provided under a regularity condition on the median function and a noise condition on the probability measure. © 2011 Elsevier Ltd. All rights reserved.
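For illustration only (not part of the published abstract), here is a minimal NumPy sketch of the ε-insensitive loss ℓ_ε(t) = max(|t| − ε, 0) and of the fact the abstract relies on: at ε = 0 the loss reduces to the absolute value loss, whose empirical risk is minimized near the sample median rather than the mean. The sample distribution and grid search below are assumptions made for the demonstration.

```python
import numpy as np

def eps_insensitive_loss(t, eps):
    """epsilon-insensitive loss: zero inside the tube |t| <= eps,
    growing linearly outside it."""
    return np.maximum(np.abs(t) - eps, 0.0)

# With eps = 0 the loss is the absolute value loss |t|; its expected
# risk is minimized by the (conditional) median of the output variable.
rng = np.random.default_rng(0)
y = rng.exponential(scale=1.0, size=10001)  # skewed noise, median != mean

grid = np.linspace(0.0, 5.0, 2001)
risks = [eps_insensitive_loss(y - c, eps=0.0).mean() for c in grid]
best = grid[int(np.argmin(risks))]

# 'best' lands near the sample median of y (about ln 2 for this
# exponential sample), illustrating why a small eps > 0 approximates
# the median function in the paper's perturbation theorem.
```

This is only a one-dimensional caricature of the regression setting: the paper studies minimizers over a reproducing kernel Hilbert space, whereas here the "hypothesis" is a single constant.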
Research Area(s)
- ε-insensitive loss, Approximation, Regression, Reproducing kernel Hilbert space, Support vector machine
Citation Format(s)
Learning with varying insensitive loss. / Xiang, Dao-Hong; Hu, Ting; Zhou, Ding-Xuan.
In: Applied Mathematics Letters, Vol. 24, No. 12, 12.2011, p. 2107-2109.