Distributed Stochastic Constrained Composite Optimization Over Time-Varying Network With a Class of Communication Noise

Research output: Journal Publications and Reviews (RGC: 21, 22, 62) · Publication in refereed journal · peer-review

2 Scopus Citations



Original language: English
Number of pages: 13
Journal / Publication: IEEE Transactions on Cybernetics
Online published: 24 Nov 2021
Publication status: Online published - 24 Nov 2021


This article is concerned with the distributed stochastic multiagent constrained optimization problem over a time-varying network subject to a class of communication noise. The problem is considered in the composite optimization setting, which is more general than that of the existing literature on noisy network optimization. It is noteworthy that the mainstream existing methods for noisy network optimization are based on Euclidean projection. Based on the Bregman projection-based mirror descent scheme, we present a non-Euclidean method and investigate its convergence behavior. This method, the distributed stochastic composite mirror descent method with noise (DSCMD-N), provides a more general algorithmic framework. Some new error bounds for DSCMD-N are obtained. To the best of our knowledge, this is the first work to analyze and derive convergence rates of optimization algorithms in the noisy network optimization setting. We also show that the proposed method attains the optimal rate of O(1/√T) for nonsmooth convex optimization under an appropriate communication noise condition. Moreover, novel convergence results are comprehensively derived in the senses of convergence in expectation, convergence with high probability, and almost sure convergence.
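To make the setting concrete, the following is a minimal toy sketch (not the paper's DSCMD-N algorithm or its analysis) of distributed mirror descent over a time-varying network with additive communication noise. All specifics here are illustrative assumptions: agents hold linear costs over the probability simplex, the negative-entropy mirror map is used (so the Bregman projection reduces to an exponentiated-gradient update), two doubly stochastic mixing matrices are alternated to mimic a time-varying graph, and the composite regularizer is taken to be zero.

```python
import numpy as np

def entropy_mirror_step(y, grad, eta):
    """Mirror-descent step under the negative-entropy mirror map:
    the Bregman projection onto the simplex becomes an
    exponentiated-gradient update followed by normalization."""
    z = np.clip(y, 1e-12, None) * np.exp(-eta * grad)
    return z / z.sum()

rng = np.random.default_rng(0)
n, d, T, sigma = 5, 4, 300, 0.05     # agents, dimension, iterations, noise level
c = rng.normal(size=(n, d))          # agent i's cost: f_i(x) = <c_i, x> (assumed)
x = np.full((n, d), 1.0 / d)         # all agents start at the simplex center

# Two doubly stochastic mixing matrices, alternated over time
# to mimic a time-varying communication graph (illustrative choice).
P = np.roll(np.eye(n), 1, axis=1)
W_ring = 0.5 * np.eye(n) + 0.25 * P + 0.25 * P.T
W_full = np.full((n, n), 1.0 / n)

for t in range(1, T + 1):
    W = W_ring if t % 2 else W_full
    noisy = x + sigma * rng.normal(size=x.shape)   # additive link noise
    y = W @ noisy                                  # noisy consensus averaging
    eta = 1.0 / np.sqrt(t)                         # diminishing step size
    x = np.array([entropy_mirror_step(y[i], c[i], eta) for i in range(n)])

avg_cost = np.mean([c[i] @ x[i] for i in range(n)])
```

The diminishing step size 1/√t matches the O(1/√T) rate regime discussed in the abstract; the clip before the exponentiated update keeps the iterate in the domain of the entropy mirror map even when link noise pushes the averaged value slightly outside the simplex.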

Research Area(s)

  • Optimization, Convergence, Noise measurement, Mirrors, Stochastic processes, Optimization methods, Linear programming, Communication noise, composite optimization, distributed optimization, mirror descent, multiagent network, GRADIENT ALGORITHM, CONVEX-FUNCTIONS, STATE ESTIMATION, NEURAL-NETWORKS, CONSENSUS