Cross-validation based adaptation for regularization operators in learning theory

Research output: Journal Publications and Reviews · RGC 21 - Publication in refereed journal · peer-review

69 Citations (Scopus)

Abstract

We consider learning algorithms induced by regularization methods in the regression setting. We show that previously obtained error bounds for these algorithms, using a priori choices of the regularization parameter, can be attained using a suitable a posteriori choice based on cross-validation. In particular, these results prove adaptation of the rate of convergence of the estimators to the minimax rate induced by the "effective dimension" of the problem. We also show universal consistency for this broad class of methods which includes regularized least-squares, truncated SVD, Landweber iteration and ν-method. © 2010 World Scientific Publishing Company.
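As a rough illustration of the a posteriori parameter choice described in the abstract, the sketch below runs kernel regularized least-squares (one of the methods the paper covers) on synthetic data and picks the regularization parameter by hold-out cross-validation, i.e. by minimizing the validation error over a grid. All specifics here (Gaussian kernel, data model, grid of λ values) are illustrative assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D regression data (hypothetical example, not from the paper).
n = 200
X = rng.uniform(-1.0, 1.0, size=(n, 1))
y = np.sin(np.pi * X[:, 0]) + 0.1 * rng.standard_normal(n)

def gaussian_kernel(A, B, sigma=0.5):
    """Gaussian kernel matrix between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

# Hold-out split: fit on the first half, validate on the second.
m = n // 2
Xtr, ytr = X[:m], y[:m]
Xva, yva = X[m:], y[m:]
Ktr = gaussian_kernel(Xtr, Xtr)
Kva = gaussian_kernel(Xva, Xtr)

# No a priori knowledge of the best lambda: choose it a posteriori
# by minimizing the mean squared error on the validation half.
lambdas = np.logspace(-8, 0, 25)
errors = []
for lam in lambdas:
    # Regularized least-squares (kernel ridge) coefficients.
    alpha = np.linalg.solve(Ktr + m * lam * np.eye(m), ytr)
    errors.append(np.mean((Kva @ alpha - yva) ** 2))

best = lambdas[int(np.argmin(errors))]
print(f"selected lambda: {best:.2e}")
```

The same grid-and-validate loop applies unchanged to the other regularization operators mentioned (truncated SVD, Landweber iteration, ν-method); only the line computing `alpha` would differ.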
Original language: English
Pages (from-to): 161-183
Journal: Analysis and Applications
Volume: 8
Issue number: 2
DOIs
Publication status: Published - Apr 2010

Research Keywords

  • Error bounds
  • Learning theory
  • Regression
  • Statistical adaptation

