TY - JOUR
T1 - Discrete-Time Algorithms for Distributed Constrained Convex Optimization With Linear Convergence Rates
AU - Liu, Hongzhe
AU - Yu, Wenwu
AU - Chen, Guanrong
PY - 2022/6
Y1 - 2022/6
N2 - This article studies a constrained optimization problem whose global objective function is the sum of convex local cost functions and whose constraint set is a closed convex set. The aim is to solve this problem in a distributed manner, that is, using only local computations and local information exchanges. To this end, two gradient-tracking-based distributed optimization algorithms are designed, one for weight-balanced graphs and one for weight-unbalanced graphs. Since the classical projection method is unsuitable for handling the closed convex set constraint under the gradient-tracking framework, a new indirect projection method is employed to deal with this constraint. Furthermore, two time scales are introduced to complete the convergence analyses. Finally, under the condition that all local cost functions are strongly convex and $L$-smooth, it is proved that the algorithms with well-chosen fixed step sizes achieve linear convergence rates.
KW - constraint
KW - directed graph
KW - distributed optimization
KW - linear convergence rate
UR - http://www.scopus.com/inward/record.url?scp=85132452747&partnerID=8YFLogxK
U2 - 10.1109/TCYB.2020.3022240
DO - 10.1109/TCYB.2020.3022240
M3 - RGC 21 - Publication in refereed journal
C2 - 33095723
SN - 2168-2267
VL - 52
SP - 4874
EP - 4885
JO - IEEE Transactions on Cybernetics
JF - IEEE Transactions on Cybernetics
IS - 6
ER -
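
As context for the abstract above: the record describes gradient-tracking-based distributed optimization with linear convergence. The sketch below is a minimal illustration of the classical (unconstrained) gradient-tracking iteration over a weight-balanced graph, not the article's algorithm; the quadratic local costs, the ring-graph mixing matrix W, the step size alpha, and the agent count are all assumptions chosen for the demo, and the article's indirect projection step for the closed convex set constraint (as well as its weight-unbalanced variant) is not reproduced here.

import numpy as np

# Minimal sketch of classical gradient tracking on a weight-balanced graph.
# NOT the article's constrained algorithm: the indirect projection method
# for the closed convex set constraint is specific to the paper and omitted.

n, d = 4, 2                        # assumed: 4 agents, 2-dimensional variable
rng = np.random.default_rng(0)

# Assumed strongly convex, L-smooth local costs: f_i(x) = 0.5*||A_i x - b_i||^2
A = rng.standard_normal((n, d, d)) + 2 * np.eye(d)
b = rng.standard_normal((n, d))

def grad(i, x):
    # Gradient of f_i at x: A_i^T (A_i x - b_i)
    return A[i].T @ (A[i] @ x - b[i])

# Assumed doubly stochastic (hence weight-balanced) mixing matrix, ring graph
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

alpha = 0.02                       # fixed step size, assumed small enough
x = rng.standard_normal((n, d))    # local estimates, one row per agent
y = np.array([grad(i, x[i]) for i in range(n)])   # gradient trackers

for k in range(2000):
    x_new = W @ x - alpha * y                     # consensus + tracked-gradient step
    g_old = np.array([grad(i, x[i]) for i in range(n)])
    g_new = np.array([grad(i, x_new[i]) for i in range(n)])
    y = W @ y + g_new - g_old                     # track the average gradient
    x = x_new

# Compare against the centralized optimum of sum_i f_i
x_star = np.linalg.solve(sum(A[i].T @ A[i] for i in range(n)),
                         sum(A[i].T @ b[i] for i in range(n)))
print("disagreement:", np.max(np.linalg.norm(x - x.mean(axis=0), axis=1)))
print("error to optimum:", np.max(np.linalg.norm(x - x_star, axis=1)))

For strongly convex, L-smooth costs and a well-chosen fixed step size, this iteration converges linearly; the cited article establishes the analogous linear-rate guarantee for the constrained setting via its indirect projection method.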