On Linear Convergence of ADMM for Decentralized Quantile Regression

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

Detail(s)

Original language: English
Pages (from-to): 3945-3955
Journal / Publication: IEEE Transactions on Signal Processing
Volume: 71
Online published: 23 Oct 2023
Publication status: Published - 2023

Abstract

The alternating direction method of multipliers (ADMM) is a natural choice for distributed parameter learning. For smooth and strongly convex consensus optimization problems, ADMM and some of its variants have been shown to enjoy linear convergence in the distributed setting, much as in the traditional non-distributed setting. The optimization problem associated with parameter estimation in quantile regression is neither smooth nor strongly convex (although it is convex), and thus it would seem that ADMM can achieve at best sublinear convergence. Although this suggests slow convergence, we show that, if the local sample size is sufficiently large relative to the parameter dimension and network size, distributed estimation in quantile regression actually exhibits linear convergence up to the statistical precision, the precise meaning of which is explained in the text. © 2023 IEEE.
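
For context on the machinery the abstract and keywords refer to, below is a minimal, centralized Python sketch of ADMM applied to the quantile regression objective. It is not the paper's decentralized algorithm; all function names, the residual-splitting formulation, and the parameter choices are illustrative assumptions. What it shows is why ADMM suits this nonsmooth loss: the proximal operator of the check loss is available in closed form.

```python
# Illustrative sketch only (NOT the paper's decentralized method):
# ADMM for  min_beta  sum_i rho_tau(y_i - x_i' beta)
# via the split  min rho_tau(r)  s.t.  r = y - X beta  (scaled-dual form),
# where rho_tau(u) = u * (tau - 1{u < 0}) is the quantile check loss.
import numpy as np

def prox_check(v, tau, lam):
    """Elementwise proximal operator of lam * rho_tau (closed form)."""
    return np.where(v > lam * tau, v - lam * tau,
                    np.where(v < lam * (tau - 1.0), v - lam * (tau - 1.0), 0.0))

def admm_quantile(X, y, tau=0.5, rho=1.0, iters=500):
    n, p = X.shape
    beta = np.zeros(p)
    r = np.zeros(n)   # auxiliary residual block, r ~ y - X beta
    u = np.zeros(n)   # scaled dual variable
    XtX = X.T @ X + 1e-10 * np.eye(p)  # tiny ridge for numerical stability
    for _ in range(iters):
        # beta-step: smooth least-squares solve (quadratic augmented term)
        beta = np.linalg.solve(XtX, X.T @ (y - r + u))
        # r-step: closed-form prox of the nonsmooth check loss
        r = prox_check(y - X @ beta + u, tau, 1.0 / rho)
        # dual ascent on the constraint r = y - X beta
        u += y - X @ beta - r
    return beta

# Toy usage: median regression (tau = 0.5) under heavy-tailed noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.standard_t(df=3, size=200)
print(admm_quantile(X, y, tau=0.5))
```

The design point the sketch illustrates is that all of the nonsmoothness is isolated in the r-step, where the prox is closed-form, while the beta-step remains a smooth least-squares problem.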

Research Area(s)

  • ADMM, linear convergence, proximal operator