On the robustness of regularized pairwise learning methods based on kernels
Author(s)
Christmann, Andreas; Zhou, Ding-Xuan
Detail(s)
| Original language | English |
| --- | --- |
| Pages (from-to) | 1-33 |
| Journal / Publication | Journal of Complexity |
| Volume | 37 |
| Publication status | Published - 1 Dec 2016 |
Abstract
Regularized empirical risk minimization, including support vector machines, plays an important role in machine learning theory. In this paper, regularized pairwise learning (RPL) methods based on kernels are investigated. One example is regularized minimization of the error entropy loss, which has recently attracted considerable interest from the viewpoint of consistency and learning rates. This paper shows that such RPL methods, and also their empirical bootstrap, additionally have good statistical robustness properties if the loss function and the kernel are chosen appropriately. We treat two cases of particular interest: (i) a bounded, non-convex loss function and (ii) an unbounded convex loss function satisfying a certain Lipschitz-type condition.
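To fix ideas, the following is a minimal LaTeX sketch of a kernel-based RPL objective in standard notation; the symbols ($D$, $L$, $H$, $\lambda$, $h$) and the particular Parzen-window form of the error entropy loss are illustrative assumptions, not formulas quoted from the paper.

```latex
% Minimal sketch (assumes amsmath); notation is illustrative, not quoted
% from the paper. Given data D = ((x_1,y_1),...,(x_n,y_n)), a pairwise
% loss L, an RKHS H with kernel k, and a regularization parameter
% \lambda > 0, a kernel-based RPL estimator minimizes the regularized
% empirical pairwise risk:
\[
  f_{D,\lambda}
  \;=\; \operatorname*{arg\,min}_{f \in H}\;
  \frac{1}{n^2} \sum_{i=1}^{n} \sum_{j=1}^{n}
    L\bigl(x_i, y_i, x_j, y_j, f(x_i), f(x_j)\bigr)
    \;+\; \lambda \, \lVert f \rVert_H^{2} .
\]
% One common Parzen-window form of the error entropy loss (bounded and
% non-convex, matching case (i) of the abstract), with bandwidth h > 0
% and residuals r = y - f(x), r' = y' - f(x'):
\[
  L\bigl(x, y, x', y', f(x), f(x')\bigr)
  \;=\; h^{2} \Bigl( 1 - \exp\Bigl( -\frac{(r - r')^{2}}{2 h^{2}} \Bigr) \Bigr).
\]
```

Case (ii) of the abstract would instead correspond to convex pairwise losses that are unbounded but Lipschitz in the function arguments, for instance a hinge-type ranking loss.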
Research Area(s)
- Machine learning, Pairwise loss function, Regularized risk, Robustness
Citation Format(s)
On the robustness of regularized pairwise learning methods based on kernels. / Christmann, Andreas; Zhou, Ding-Xuan.
In: Journal of Complexity, Vol. 37, 01.12.2016, p. 1-33.