Optimal Learning Rates for Kernel Partial Least Squares

Research output: Journal Publications and Reviews — publication in refereed journal, peer-reviewed (RGC: 21, 22, 62)

6 Scopus Citations


Detail(s)

Original language: English
Pages (from-to): 908-933
Journal / Publication: Journal of Fourier Analysis and Applications
Volume: 24
Issue number: 3
Online published: 7 Apr 2017
Publication status: Published - Jun 2018

Abstract

We study two learning algorithms generated by the kernel partial least squares (KPLS) and kernel minimal residual (KMR) methods. In these algorithms, regularization against overfitting is achieved by early stopping, which makes the stopping rule crucial to their learning capabilities. We propose a stopping rule for determining the number of iterations based on cross-validation, without assuming a priori knowledge of the underlying probability measure, and show that optimal learning rates can be achieved. Our analysis rests on two ingredients: a bound on the number of iterations in an a priori knowledge-based stopping rule for KMR, and a comparison argument that serves as a stepping stone from KMR to KPLS. Technical tools include a recently developed integral operator approach based on a second order decomposition of inverse operators and an orthogonal polynomial argument.
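The abstract's central idea — iterative kernel regression regularized by a data-driven early-stopping rule — can be illustrated with a small sketch. The code below is not the paper's method: it runs conjugate-gradient iterations on the kernel system (whose iterates span the same Krylov subspace that KPLS builds for least-squares loss) and picks the stopping iteration by minimizing error on a held-out validation set, a simplified stand-in for the cross-validation rule analyzed in the paper. All function names and parameters (`gaussian_kernel`, `sigma`, `t_max`) are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(X, Z, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel matrix between rows of X and Z.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def kernel_cg_iterates(K, y, t_max):
    # Conjugate-gradient iterations on K @ alpha = y. The t-th iterate lies
    # in the Krylov subspace span{y, K y, ..., K^{t-1} y}, the same subspace
    # a KPLS-type iteration explores. Returns the coefficient vectors
    # alpha_1, ..., alpha_t.
    alpha = np.zeros(len(y))
    r = y.copy()              # residual y - K @ alpha
    p = r.copy()              # search direction
    iterates = []
    for _ in range(t_max):
        Kp = K @ p
        denom = p @ Kp
        if denom <= 1e-12:    # Krylov subspace exhausted
            break
        step = (r @ r) / denom
        alpha = alpha + step * p
        r_new = r - step * Kp
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
        iterates.append(alpha.copy())
    return iterates

def stop_by_validation(X_tr, y_tr, X_val, y_val, t_max=20, sigma=1.0):
    # Data-driven stopping rule: choose the iteration whose predictor has
    # the smallest validation error (early stopping as regularization).
    K = gaussian_kernel(X_tr, X_tr, sigma)
    K_val = gaussian_kernel(X_val, X_tr, sigma)
    iterates = kernel_cg_iterates(K, y_tr, t_max)
    errs = [np.mean((K_val @ a - y_val) ** 2) for a in iterates]
    t_star = int(np.argmin(errs)) + 1
    return t_star, iterates[t_star - 1]
```

Running too many iterations overfits the noise in `y_tr`, so the validation curve typically decreases and then rises; the selected `t_star` plays the role of the regularization parameter, which is exactly why the stopping rule governs the learning rate.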

Research Area(s)

  • Learning theory, Kernel partial least squares, Kernel minimal residual, Cross-validation, Regression, Approximations, Algorithms, Operators