An approximation theory approach to learning with ℓ1 regularization

Hong-Yan Wang, Quan-Wu Xiao, Ding-Xuan Zhou

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

17 Citations (Scopus)

Abstract

Regularization schemes with an ℓ1-regularizer often produce sparse representations for objects in approximation theory, image processing, statistics and learning theory. In this paper, we study a kernel-based learning algorithm for regression generated by regularization schemes associated with the ℓ1-regularizer. We show that convergence rates of the learning algorithm can be independent of the dimension of the input space of the regression problem when the kernel is smooth enough. This confirms the effectiveness of the learning algorithm. Our error analysis is carried out by means of an approximation theory approach using a local polynomial reproduction formula and the norming set condition. © 2012 Elsevier Inc.
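As an illustrative sketch only (the paper's analysis concerns this class of schemes, not a specific solver), the ℓ1-regularized kernel regression problem described in the abstract can be posed as a lasso over the coefficients of a kernel expansion f(x) = Σ_j c_j K(x, x_j), and solved by proximal gradient descent (ISTA). The Gaussian kernel, the bandwidth `sigma`, the regularization weight `lam`, and the iteration count are all assumptions made for this example.

```python
import numpy as np

def gaussian_kernel(X, Z, sigma=1.0):
    # Pairwise Gaussian kernel matrix: K[i, j] = exp(-||x_i - z_j||^2 / (2 sigma^2)).
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def l1_kernel_regression(X, y, lam=0.1, sigma=1.0, n_iter=2000):
    """Fit f(x) = sum_j c_j K(x, x_j) by minimizing the ell-1-regularized
    empirical risk  (1/m) ||K c - y||^2 + lam ||c||_1  with ISTA.
    (Hypothetical solver for illustration; not the paper's algorithm.)"""
    m = len(y)
    K = gaussian_kernel(X, X, sigma)
    # Lipschitz constant of the gradient of the smooth part: (2/m) ||K||_2^2.
    L = 2.0 * np.linalg.norm(K, 2) ** 2 / m
    c = np.zeros(m)
    for _ in range(n_iter):
        grad = (2.0 / m) * K.T @ (K @ c - y)
        z = c - grad / L
        # Soft-thresholding: proximal operator of (lam/L) ||.||_1.
        # This is what drives many coefficients to exactly zero (sparsity).
        c = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return c, K
```

A typical use on synthetic one-dimensional data:

```python
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (40, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(40)
c, K = l1_kernel_regression(X, y, lam=0.1)
# Predictions on training data are K @ c; larger lam yields sparser c.
```

The soft-thresholding step is the mechanism behind the sparse representations the abstract refers to: coefficients whose gradient magnitude stays below the threshold λ are set exactly to zero.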
Original language: English
Pages (from-to): 240-258
Journal: Journal of Approximation Theory
Volume: 167
DOIs
Publication status: Published - Mar 2013

Research Keywords

  • ℓ1-regularizer
  • Data dependent hypothesis spaces
  • Kernel-based regularization scheme
  • Learning theory
  • Multivariate approximation

