Boosted Kernel Ridge Regression: Optimal Learning Rates and Early Stopping

Research output: Journal Publications and Reviews (RGC 21 - Publication in refereed journal, peer-review)

16 Scopus Citations

Author(s)

  • Shao-Bo Lin
  • Yunwen Lei
  • Ding-Xuan Zhou

Detail(s)

Original language: English
Article number: 46
Journal / Publication: Journal of Machine Learning Research
Volume: 20
Online published: Feb 2019
Publication status: Published - 2019

Abstract

In this paper, we introduce a learning algorithm, boosted kernel ridge regression (BKRR), that combines L2-Boosting with kernel ridge regression (KRR). We analyze the learning performance of this algorithm in the framework of learning theory. We show that BKRR provides a new bias-variance trade-off, tuned via the number of boosting iterations, which differs from KRR, where the trade-off is controlled by adjusting the regularization parameter. A (semi-)exponential bias-variance trade-off is derived for BKRR, exhibiting a stable relationship between the generalization error and the number of iterations. Furthermore, an adaptive stopping rule is proposed, with which BKRR achieves the optimal learning rate without saturation.
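As a rough illustration of the algorithm described in the abstract, the following Python sketch implements L2-Boosting with a KRR base learner: at each iteration, KRR is fit to the current residuals and added to the running ensemble, so the number of iterations plays the role of the regularization knob. All names here are illustrative, the kernel choice is an assumption, and the paper's adaptive stopping rule is replaced by simply returning the per-iteration coefficients so a validation error can be monitored; this is a sketch under those assumptions, not the authors' implementation.

    import numpy as np

    def gaussian_kernel(X, Z, sigma=1.0):
        # Illustrative Gaussian kernel; the analysis applies to general Mercer kernels.
        d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))

    def bkrr(K, y, lam, n_iter):
        """L2-Boosting with a KRR base learner (sketch).

        K      : (n, n) kernel Gram matrix on the training inputs
        y      : (n,) training targets
        lam    : KRR regularization parameter
        n_iter : number of boosting iterations (the early-stopping knob)
        Returns the accumulated dual coefficients after each iteration.
        """
        n = K.shape[0]
        A = K + n * lam * np.eye(n)                  # regularized KRR system matrix
        alpha = np.zeros(n)                          # accumulated dual coefficients
        residual = y.copy()
        path = []
        for _ in range(n_iter):
            alpha += np.linalg.solve(A, residual)    # KRR fit to current residuals
            residual = y - K @ alpha                 # update residuals
            path.append(alpha.copy())
        return path

A prediction at a new point x is then the kernel expansion sum_i alpha_i k(x_i, x); in practice one would select the iteration whose coefficients minimize error on a hold-out set, as a crude stand-in for the adaptive stopping rule analyzed in the paper.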

Research Area(s)

  • learning theory, kernel ridge regression, boosting, integral operator, iterated Tikhonov regularization, spectral algorithms, gradient, approximation, parameter, operators, theorem, choice

Citation Format(s)

Boosted Kernel Ridge Regression: Optimal Learning Rates and Early Stopping. / Lin, Shao-Bo; Lei, Yunwen; Zhou, Ding-Xuan.
In: Journal of Machine Learning Research, Vol. 20, 46, 2019.

