Iterative regularization for learning with convex loss functions

Junhong Lin*, Lorenzo Rosasco, Ding-Xuan Zhou

*Corresponding author for this work

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

28 Citations (Scopus)

Abstract

We consider the problem of supervised learning with convex loss functions and propose a new form of iterative regularization based on the subgradient method. Unlike other regularization approaches, in iterative regularization no constraint or penalization is considered, and generalization is achieved by (early) stopping an empirical iteration. We consider a nonparametric setting, in the framework of reproducing kernel Hilbert spaces, and prove consistency and finite sample bounds on the excess risk under general regularity conditions. Our study provides a new class of efficient regularized learning algorithms and gives insights on the interplay between statistics and optimization in machine learning.
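The following is a minimal sketch, not the authors' code, of the kind of procedure the abstract describes: functional subgradient descent for a convex loss (here the hinge loss) in a reproducing kernel Hilbert space, where the number of iterations T plays the role of the regularization parameter (early stopping). The Gaussian kernel, step size, and stopping time are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(X1, X2, sigma=1.0):
    # Illustrative kernel choice; any positive-definite kernel would do.
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def kernel_subgradient_hinge(X, y, T=100, eta=0.5, sigma=1.0):
    """Run T functional subgradient steps for the hinge loss.

    The iterate f_t = sum_i alpha_i k(x_i, .) is represented by its
    coefficient vector alpha; stopping at T is the regularization.
    """
    n = len(y)
    K = gaussian_kernel(X, X, sigma)
    alpha = np.zeros(n)
    for t in range(T):
        margins = y * (K @ alpha)                 # y_i f_t(x_i)
        # Functional (RKHS) subgradient of the empirical hinge loss,
        # expressed in the expansion coefficients.
        g = -(margins < 1).astype(float) * y / n
        alpha -= eta * g
    return alpha

# Usage on toy data with labels in {-1, +1}.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = np.sign(X[:, 0] + 0.1 * rng.normal(size=40))
alpha = kernel_subgradient_hinge(X, y)
```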
Original language: English
Pages (from-to): 1-38
Journal: Journal of Machine Learning Research
Volume: 17
Publication status: Published - 1 May 2016

Funding

The work described in this paper is supported partially by the Research Grants Council of Hong Kong [Project No. CityU 104012] and by National Natural Science Foundation of China under Grant 11461161006. LR is supported by the FIRB project RBFR12M3AC and the Center for Minds, Brains and Machines (CBMM), funded by NSF STC award CCF-1231216. JL is now within LCSL, MIT & Istituto Italiano di Tecnologia. The authors would like to thank the referees and Dr. Yunlong Feng for their valuable comments.

Research Keywords

  • CLASSIFICATION
  • CONSISTENCY
  • ALGORITHMS
  • ADABOOST
