An approximation theory approach to learning with ℓ1 regularization

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

14 Scopus Citations

Author(s)

  • Hong-Yan Wang
  • Quan-Wu Xiao
  • Ding-Xuan Zhou

Detail(s)

Original language: English
Pages (from-to): 240-258
Journal / Publication: Journal of Approximation Theory
Volume: 167
Publication status: Published - Mar 2013

Abstract

Regularization schemes with an ℓ1-regularizer often produce sparse representations for objects in approximation theory, image processing, statistics and learning theory. In this paper, we study a kernel-based learning algorithm for regression generated by regularization schemes associated with the ℓ1-regularizer. We show that convergence rates of the learning algorithm can be independent of the dimension of the input space of the regression problem when the kernel is smooth enough. This confirms the effectiveness of the learning algorithm. Our error analysis is carried out by means of an approximation theory approach using a local polynomial reproduction formula and the norming set condition. © 2012 Elsevier Inc.
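To make the scheme in the abstract concrete, here is a minimal sketch of coefficient-based ℓ1-regularized kernel regression: minimize (1/m)‖Kc − y‖² + λ‖c‖₁ over the coefficients c of a kernel expansion on the sample, solved by proximal gradient descent (ISTA) with soft-thresholding. This is a generic illustration, not the paper's algorithm or analysis; the Gaussian kernel, the ISTA solver, and all function and parameter names are assumptions made for this example.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # Gram matrix K[i, j] = exp(-||x_i - y_j||^2 / (2 sigma^2))
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def l1_kernel_regression(X, y, lam=0.1, sigma=1.0, n_iter=500):
    """Minimize (1/m) ||K c - y||^2 + lam * ||c||_1 via ISTA.

    Returns the coefficient vector c and the Gram matrix K, so that
    the fitted function on the sample is K @ c.
    """
    m = len(y)
    K = gaussian_kernel(X, X, sigma)
    # Lipschitz constant of the smooth part's gradient: (2/m) ||K||_2^2
    L = 2.0 * np.linalg.norm(K, 2) ** 2 / m
    step = 1.0 / L
    c = np.zeros(m)
    for _ in range(n_iter):
        grad = (2.0 / m) * K.T @ (K @ c - y)      # gradient of the data-fit term
        z = c - step * grad                        # gradient step
        # Proximal step for lam * ||.||_1: soft-thresholding
        c = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return c, K

# Toy usage on a 1-D regression problem
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(30, 1))
y = np.sin(X[:, 0])
c, K = l1_kernel_regression(X, y, lam=0.1)
```

The soft-thresholding step drives many entries of c exactly to zero, which is the source of the sparse representations the abstract refers to; the paper's contribution is the error analysis of such schemes, not the optimization routine itself.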

Research Area(s)

  • ℓ1-regularizer, Data dependent hypothesis spaces, Kernel-based regularization scheme, Learning theory, Multivariate approximation

Citation Format(s)

An approximation theory approach to learning with ℓ1 regularization. / Wang, Hong-Yan; Xiao, Quan-Wu; Zhou, Ding-Xuan.
In: Journal of Approximation Theory, Vol. 167, 03.2013, p. 240-258.