Learning Theory of Randomized Sparse Kaczmarz Method

Research output: Publication in refereed journal (peer-reviewed)

5 Scopus Citations

Detail(s)

Original language: English
Pages (from-to): 547-574
Journal / Publication: SIAM Journal on Imaging Sciences
Volume: 11
Issue number: 1
Publication status: Published - 20 Feb 2018

Abstract

In this paper we propose an online learning algorithm, a general randomized sparse Kaczmarz method, for generating sparse approximate solutions to linear systems, and present a learning theory analysis of its convergence. Under a mild assumption covering the case of noisy random measurements in the sampling process or a nonlinear regression function, we show that the algorithm converges in expectation if and only if the step size sequence {η_t}_{t∈ℕ} satisfies lim_{t→∞} η_t = 0 and ∑_{t=1}^∞ η_t = ∞. Convergence rates are also obtained, and linear convergence is shown to be impossible under the assumption of positive variance of the sampling process. A sufficient condition for almost sure convergence is derived under the additional restriction ∑_{t=1}^∞ η_t² < ∞. Our novel analysis proceeds by interpreting the randomized sparse Kaczmarz method as a special online mirror descent algorithm with a nondifferentiable mirror map and by using the Bregman distance. The necessary and sufficient conditions are derived by establishing a restricted variant of strong convexity for the involved generalization error and by exploiting the special structure of the soft-thresholding operator.
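As a rough illustration of the iteration analyzed in the abstract (a Kaczmarz-type update on an auxiliary variable followed by soft-thresholding), here is a minimal NumPy sketch. The step size choice η_t = 1/√t satisfies the stated conditions lim η_t = 0 and ∑ η_t = ∞; this choice, the row normalization, and all names are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def soft_threshold(z, lam):
    # Componentwise soft-thresholding operator S_lam(z) = sign(z) * max(|z| - lam, 0).
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def randomized_sparse_kaczmarz(A, b, lam=0.1, T=5000, seed=0):
    """Sketch of a randomized sparse Kaczmarz iteration for Ax = b.

    At each step a row i is sampled at random; the auxiliary (dual)
    variable z is updated in the Kaczmarz direction with step size
    eta_t = 1/sqrt(t) (an assumed choice satisfying the conditions in
    the abstract), and the primal iterate is x = S_lam(z).
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    z = np.zeros(n)                  # auxiliary variable of the mirror descent view
    x = soft_threshold(z, lam)       # primal iterate via the soft-thresholding map
    for t in range(1, T + 1):
        i = rng.integers(m)          # sample a row uniformly at random
        eta = 1.0 / np.sqrt(t)       # eta_t -> 0 and sum eta_t = infinity
        residual = b[i] - A[i] @ x
        z = z + eta * residual / (A[i] @ A[i]) * A[i]  # relaxed Kaczmarz step on z
        x = soft_threshold(z, lam)
    return x
```

On a consistent system with a sparse solution, the iterates drive the residual toward zero while the soft-thresholding keeps the approximation sparse.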

Research Area(s)

  • Bregman distance, Learning theory, Linearized Bregman iteration, Online learning, Randomized sparse Kaczmarz algorithm