TY - JOUR
T1 - Distributed Randomized Gradient-Free Mirror Descent Algorithm for Constrained Optimization
AU - Yu, Zhan
AU - Ho, Daniel W. C.
AU - Yuan, Deming
PY - 2022/2
Y1 - 2022/2
N2 - This paper is concerned with the multi-agent optimization problem. A distributed randomized gradient-free mirror descent (DRGFMD) method is developed by introducing a randomized gradient-free oracle into the mirror descent scheme, where a non-Euclidean Bregman divergence is used. The classical gradient descent method is generalized without using subgradient information of the objective functions. The proposed algorithms are the first distributed non-Euclidean zeroth-order methods to achieve an approximate O(1/√T) rate of convergence, recovering the best known optimal rate for distributed nonsmooth constrained convex optimization. Moreover, a decentralized reciprocal weighted averaging (RWA) approximating sequence is investigated for the first time, and the convergence of the RWA sequence is shown to hold over time-varying graphs. Rates of convergence are comprehensively explored for the algorithm with RWA (DRGFMD-RWA). The technique of constructing the decentralized RWA sequence provides new insight into searching for minimizers in distributed algorithms.
AB - This paper is concerned with the multi-agent optimization problem. A distributed randomized gradient-free mirror descent (DRGFMD) method is developed by introducing a randomized gradient-free oracle into the mirror descent scheme, where a non-Euclidean Bregman divergence is used. The classical gradient descent method is generalized without using subgradient information of the objective functions. The proposed algorithms are the first distributed non-Euclidean zeroth-order methods to achieve an approximate O(1/√T) rate of convergence, recovering the best known optimal rate for distributed nonsmooth constrained convex optimization. Moreover, a decentralized reciprocal weighted averaging (RWA) approximating sequence is investigated for the first time, and the convergence of the RWA sequence is shown to hold over time-varying graphs. Rates of convergence are comprehensively explored for the algorithm with RWA (DRGFMD-RWA). The technique of constructing the decentralized RWA sequence provides new insight into searching for minimizers in distributed algorithms.
KW - Approximation algorithms
KW - Convergence
KW - Convex functions
KW - Linear programming
KW - Machine learning algorithms
KW - Mirrors
KW - Optimization
UR - http://www.scopus.com/inward/record.url?scp=85105114102&partnerID=8YFLogxK
UR - https://www.scopus.com/record/pubmetrics.uri?eid=2-s2.0-85105114102&origin=recordpage
U2 - 10.1109/TAC.2021.3075669
DO - 10.1109/TAC.2021.3075669
M3 - Publication in refereed journal
VL - 67
SP - 957
EP - 964
JO - IEEE Transactions on Automatic Control
JF - IEEE Transactions on Automatic Control
SN - 0018-9286
IS - 2
ER -