On the robustness of regularized pairwise learning methods based on kernels

Research output: Journal Publications and Reviews (RGC: 21, 22, 62), publication in refereed journal, peer-reviewed

17 Scopus Citations

Author(s)

  • Andreas Christmann
  • Ding-Xuan Zhou

Detail(s)

Original language: English
Pages (from-to): 1-33
Journal / Publication: Journal of Complexity
Volume: 37
Publication status: Published - 1 Dec 2016

Abstract

Regularized empirical risk minimization, including support vector machines, plays an important role in machine learning theory. In this paper, regularized pairwise learning (RPL) methods based on kernels are investigated. One example is regularized minimization of the error entropy loss, which has recently attracted considerable interest from the viewpoint of consistency and learning rates. This paper shows that such RPL methods, and also their empirical bootstrap, additionally have good statistical robustness properties, provided the loss function and the kernel are chosen appropriately. We treat two cases of particular interest: (i) a bounded and non-convex loss function and (ii) an unbounded convex loss function satisfying a certain Lipschitz type condition.
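For orientation, the following is a sketch of the RPL estimator in the generic form common in this literature; the notation (sample D, hypothesis space H, regularization parameter lambda) is assumed for illustration and is not quoted from the paper. Given data D = ((x_1, y_1), ..., (x_n, y_n)), a pairwise loss L, and a reproducing kernel Hilbert space H, the estimator solves

f_{D,\lambda} = \arg\min_{f \in H} \; \frac{1}{n^2} \sum_{i,j=1}^{n} L\big(y_i, y_j, f(x_i), f(x_j)\big) + \lambda \, \|f\|_H^2 .

A representative bounded, non-convex pairwise loss in the spirit of case (i), related to the error entropy criterion with a Gaussian window of width h > 0 (the exact form used in the paper may differ), is

L(y, y', t, t') = 1 - \exp\!\left( - \frac{\big((y - t) - (y' - t')\big)^2}{2 h^2} \right),

while L(y, y', t, t') = \big| (y - t) - (y' - t') \big| is an unbounded convex loss satisfying a Lipschitz type condition, in the spirit of case (ii).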

Research Area(s)

  • Machine learning, Pairwise loss function, Regularized risk, Robustness

Citation Format(s)

On the robustness of regularized pairwise learning methods based on kernels. / Christmann, Andreas; Zhou, Ding-Xuan.

In: Journal of Complexity, Vol. 37, 01.12.2016, p. 1-33.
