An empirical feature-based learning algorithm producing sparse approximations

Research output: Journal Publications and Reviews (RGC: 21, 22, 62) · Publication in refereed journal · peer-review

19 Scopus Citations

Detail(s)

Original language: English
Pages (from-to): 389-400
Journal / Publication: Applied and Computational Harmonic Analysis
Volume: 32
Issue number: 3
Publication status: Published - May 2012

Abstract

A learning algorithm for regression is studied. It is a modified kernel projection machine (Blanchard et al., 2004 [2]) in the form of a least squares regularization scheme with an ℓ1 regularizer, set in a data-dependent hypothesis space based on empirical features (constructed from a reproducing kernel and the learning data). The algorithm has three advantages. First, it does not involve any optimization process. Second, under a mild condition it produces representations that are sparse with respect to the empirical features, without assuming sparsity in terms of any basis or system. Third, the output function converges to the regression function in the reproducing kernel Hilbert space at a satisfactory rate. The error analysis does not require any sparsity assumption on the underlying regression function. © 2011 Elsevier Inc. All rights reserved.
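The abstract's claims can be illustrated with a sketch of this style of scheme (this is an illustration consistent with the abstract, not the paper's exact formulation): the empirical features are taken as eigenvectors of the normalized kernel Gram matrix, the ℓ1-regularized least squares problem in that orthonormal basis is solved in closed form by soft thresholding (hence no iterative optimization, and exact zeros give sparsity), and the fitted function is extended to new points via the standard Nyström formula. The kernel choice (Gaussian) and all parameter values below are assumptions for the demo.

```python
import numpy as np

def rbf_kernel(X, Y, sigma=1.0):
    """Gaussian kernel matrix between the rows of X and Y (an assumed kernel choice)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def fit_predict(X, y, X_new, lam=0.05, sigma=1.0):
    """Sketch of an empirical-feature l1 scheme:
    1. empirical features = eigenvectors of the normalized Gram matrix;
    2. coefficients = soft-thresholded projections of y (closed form, no solver);
    3. prediction at new points via the Nystrom extension of the features."""
    m = len(y)
    K = rbf_kernel(X, X, sigma)
    eigvals, V = np.linalg.eigh(K / m)            # empirical features (ascending order)
    order = np.argsort(eigvals)[::-1]             # reorder: largest eigenvalue first
    eigvals, V = eigvals[order], V[:, order]
    # Least-squares coefficients in the orthonormal eigenbasis.
    a = V.T @ y
    # l1 penalty => soft thresholding: many coefficients become exactly zero.
    c = np.sign(a) * np.maximum(np.abs(a) - lam, 0.0)
    # Drop numerically null directions, then extend features to X_new (Nystrom).
    keep = eigvals > 1e-10
    Phi_new = rbf_kernel(X_new, X, sigma) @ V[:, keep] / (m * eigvals[keep])
    return Phi_new @ c[keep], c
```

Because the eigenvectors are orthonormal, thresholding each projection independently is the exact minimizer of the ℓ1-penalized empirical risk in this basis, which is why the method needs no iterative optimization.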

Research Area(s)

  • ℓ1-regularizer, Empirical features, Learning theory, Reproducing kernel Hilbert space, Sparsity