Gradient learning in a classification setting by gradient descent

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review


Author(s)

  • Jia Cai
  • Hongyan Wang
  • Ding-Xuan Zhou


Detail(s)

Original language: English
Pages (from-to): 674-692
Journal / Publication: Journal of Approximation Theory
Volume: 161
Issue number: 2
Publication status: Published - Dec 2009

Abstract

Learning gradients is one approach to variable selection and feature covariation estimation when dealing with large datasets involving many variables or coordinates. In a classification setting with a convex loss function, gradient learning can be implemented by solving convex quadratic programming problems induced by regularization schemes in reproducing kernel Hilbert spaces. The computational complexity of such an algorithm can be very high when the number of variables or samples is large. We introduce a gradient descent algorithm for gradient learning in classification. The algorithm is simple to implement, and we study its convergence. Explicit learning rates are presented in terms of the regularization parameter and the step size. A detailed analysis of approximation by reproducing kernel Hilbert spaces, under mild conditions on the probability measure for sampling, allows us to deal with a general class of convex loss functions. © 2008 Elsevier Inc. All rights reserved.
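The abstract describes the scheme only at a high level, so the following is a minimal sketch of a gradient descent method of this general kind, not the paper's exact algorithm. It assumes the usual gradient-learning setup: a function f and a gradient estimate fvec are parameterized by kernel expansions over the sample, a convex loss (here logistic, as one admissible choice) is applied to first-order Taylor values y_i (f(x_j) + fvec(x_j)·(x_i − x_j)) with Gaussian locality weights, and a single regularization parameter lam penalizes both RKHS norms. All function names, the weight form, and the fixed step size are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(A, B, width=1.0):
    """Gaussian (RBF) kernel matrix K[i, j] = exp(-|a_i - b_j|^2 / (2 width^2))."""
    sq = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=2)
    return np.exp(-sq / (2.0 * width ** 2))

def gradient_learning_gd(X, y, lam=1e-3, weight_scale=1.0, kernel_width=1.0,
                         step=0.5, n_iters=200):
    """Gradient descent for gradient learning in classification (a sketch).

    Parameterization (representer-style):
        f(x)      = sum_k a[k]    * K(x_k, x)
        fvec_l(x) = sum_k C[k, l] * K(x_k, x)   (estimate of the l-th partial)
    Empirical risk with logistic loss phi(t) = log(1 + e^{-t}):
        (1/m^2) sum_{i,j} w_ij * phi( y_i * (f(x_j) + fvec(x_j) . (x_i - x_j)) )
        + lam * (||f||_K^2 + ||fvec||_K^2)
    """
    m, n = X.shape
    K = gaussian_kernel(X, X, kernel_width)    # (m, m) Gram matrix
    W = gaussian_kernel(X, X, weight_scale)    # locality weights w_ij
    D = X[:, None, :] - X[None, :, :]          # D[i, j] = x_i - x_j, (m, m, n)

    a = np.zeros(m)         # coefficients of f
    C = np.zeros((m, n))    # coefficients of the gradient estimate

    for _ in range(n_iters):
        F0 = K @ a                              # f(x_j) for every j
        G = K @ C                               # rows fvec(x_j), shape (m, n)
        U = F0[None, :] + np.einsum('ijl,jl->ij', D, G)  # Taylor values u_ij
        T = y[:, None] * U                      # margins y_i * u_ij
        # phi'(t) = -1 / (1 + e^t), computed stably via logaddexp
        phi_prime = -np.exp(-np.logaddexp(0.0, T))
        R = W * y[:, None] * phi_prime / m ** 2  # shared residual factor
        grad_a = K @ R.sum(axis=0) + 2.0 * lam * (K @ a)
        grad_C = K @ np.einsum('ij,ijl->jl', R, D) + 2.0 * lam * (K @ C)
        a -= step * grad_a
        C -= step * grad_C

    return a, C
```

With the returned coefficients, the estimated gradient at a point x is sum_k C[k, :] K(x_k, x), and coordinates whose estimated partial derivatives are uniformly small are candidates for removal in variable selection. The fixed step size is the simplest choice; since the paper's learning rates are stated in terms of the regularization parameter and the step size, the two would in practice be tuned together.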

Research Area(s)

  • Approximation error
  • Classification algorithm with convex loss
  • Gradient descent
  • Learning theory
  • Reproducing kernel Hilbert space

Citation Format(s)

Gradient learning in a classification setting by gradient descent. / Cai, Jia; Wang, Hongyan; Zhou, Ding-Xuan.
In: Journal of Approximation Theory, Vol. 161, No. 2, 12.2009, p. 674-692.
