Learning by nonsymmetric kernels with data dependent spaces and l1-regularizer

Quan-Wu Xiao, Ding-Xuan Zhou

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

57 Citations (Scopus)

Abstract

We study a learning algorithm for regression. The algorithm is a regularization scheme with an l1 regularizer posed over a hypothesis space built from the data (samples) by means of a nonsymmetric kernel. The data dependent nature of the hypothesis space introduces an extra error term, the hypothesis error, which does not arise for regularization schemes with data independent hypothesis spaces. By handling the regularization error, the sample error and the hypothesis error, we bound the total error in terms of properties of the kernel, the input space, the marginal distribution, and the regression function of the regression problem. Learning rates are then derived by choosing suitable values of the regularization parameter. An improved error decomposition approach is used in this data dependent setting.
Original language: English
Pages (from-to): 1821-1836
Journal: Taiwanese Journal of Mathematics
Volume: 14
Issue number: 5
DOIs
Publication status: Published - Oct 2010

Research Keywords

  • Data dependent hypothesis spaces
  • Error analysis
  • Learning theory
  • Nonsymmetric kernel
  • Regularization scheme
