Concentration estimates for learning with ℓ1-regularizer and data dependent hypothesis spaces

Research output: Publication in refereed journal (peer-reviewed)

95 Scopus Citations

Detail(s)

Original language: English
Pages (from-to): 286-302
Journal / Publication: Applied and Computational Harmonic Analysis
Volume: 31
Issue number: 2
Publication status: Published - Sep 2011

Abstract

We consider the regression problem by learning with a regularization scheme in a data dependent hypothesis space with an ℓ1-regularizer. The data-dependent nature of the kernel-based hypothesis space provides flexibility for the learning algorithm. The regularization scheme is essentially different from the standard one in a reproducing kernel Hilbert space: the kernel is not necessarily symmetric or positive semi-definite, and the regularizer is the ℓ1-norm of a function expansion involving samples. These differences lead to additional difficulty in the error analysis. In this paper we apply concentration techniques with ℓ2-empirical covering numbers to improve the learning rates for the algorithm. Sparsity of the algorithm is studied based on our error analysis. We also show that a function space involved in the error analysis, induced by the ℓ1-regularizer and the non-symmetric kernel, behaves well in terms of the ℓ2-empirical covering numbers of its unit ball. © 2011 Elsevier Inc.
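
For orientation, the scheme described above can be written out explicitly. The following is a minimal sketch reconstructed from the abstract's description, assuming a least-squares loss and the usual coefficient-based formulation; the precise normalization and notation are assumptions, not quoted from the paper. Given a sample z = {(x_i, y_i)}_{i=1}^m and a kernel K that need not be symmetric or positive semi-definite, the data dependent hypothesis space is

H_{K,z} = \Big\{ f = \sum_{i=1}^{m} \alpha_i K(\cdot, x_i) : \alpha = (\alpha_1, \ldots, \alpha_m) \in \mathbb{R}^m \Big\},

and the estimator is obtained by minimizing the empirical error plus the ℓ1-norm of the coefficient expansion:

f_z = \arg\min_{f \in H_{K,z}} \Big\{ \frac{1}{m} \sum_{i=1}^{m} \big( f(x_i) - y_i \big)^2 + \lambda \, \Omega_z(f) \Big\}, \qquad \Omega_z(f) = \inf \Big\{ \sum_{i=1}^{m} |\alpha_i| : f = \sum_{i=1}^{m} \alpha_i K(\cdot, x_i) \Big\}.

The ℓ1 penalty on the coefficient vector α is what induces sparsity of the solution, while the lack of symmetry or positive semi-definiteness in K removes the standard RKHS structure and motivates the ℓ2-empirical covering number analysis.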

Research Area(s)

  • ℓ1-regularizer and sparsity, ℓ2-empirical covering number, Concentration estimate for error analysis, Data dependent hypothesis space, Learning theory