Abstract
Distributed stochastic optimization, arising at the intersection of traditional stochastic optimization, distributed computing and storage, and network science, offers high efficiency and low per-iteration computational complexity for solving large-scale optimization problems. This paper addresses a large-scale convex finite-sum optimization problem in a multi-agent system over unbalanced directed networks. To tackle this problem efficiently, a distributed consensus optimization algorithm, named Push-LSVRG-UP, is developed by adopting the push-sum technique together with a distributed loopless stochastic variance-reduced gradient (LSVRG) method with uncoordinated triggered probabilities. Under this algorithmic framework, each agent performs only local computation and communicates only with its neighbors, without leaking private information. The convergence analysis of Push-LSVRG-UP relies on the contraction relationships among four error terms associated with the multi-agent system. Theoretical results provide an explicit feasible range for the constant step-size, a linear convergence rate, and an iteration complexity for reaching the globally optimal solution. It is shown that Push-LSVRG-UP achieves accelerated linear convergence, lower storage cost, and lower per-iteration computational complexity than most existing works. Meanwhile, the introduction of an uncoordinated probabilistic triggered mechanism grants agents independence and flexibility in computing local batch gradients. In simulations, the practicability and improved performance of Push-LSVRG-UP are demonstrated by solving two distributed learning problems on real-world datasets.
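The core of the method described above is the loopless SVRG (LSVRG) gradient estimator: each agent replaces the costly inner loop of classical SVRG with a probabilistic refresh of its reference point, triggered with an agent-specific (uncoordinated) probability. The following is a minimal single-agent sketch of that estimator on a synthetic least-squares finite sum; it illustrates only the variance-reduction and triggered-refresh mechanics, and omits the push-sum mixing over the directed network. All problem data, the trigger probability `p`, and the step-size `alpha` are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic finite-sum least squares: f(w) = (1/n) * sum_i 0.5 * (a_i^T w - b_i)^2
n, d = 50, 5
A = rng.standard_normal((n, d))
w_star = rng.standard_normal(d)   # planted minimizer (noiseless targets)
b = A @ w_star

def grad_i(i, w):
    """Gradient of the i-th component function at w."""
    return (A[i] @ w - b[i]) * A[i]

def full_grad(w):
    """Full (batch) gradient at w."""
    return A.T @ (A @ w - b) / n

p = 0.2        # triggered probability (per-agent and uncoordinated in the paper)
alpha = 0.01   # constant step-size (illustrative)
w = np.zeros(d)
w_ref = w.copy()
mu = full_grad(w_ref)   # stored full gradient at the reference point

for k in range(5000):
    i = rng.integers(n)
    # LSVRG variance-reduced estimator:
    # grad_i(w) - grad_i(w_ref) + full_grad(w_ref)
    g = grad_i(i, w) - grad_i(i, w_ref) + mu
    w = w - alpha * g
    # Loopless refresh: with probability p, update the reference point
    # and recompute the stored full gradient (no fixed-length inner loop).
    if rng.random() < p:
        w_ref = w.copy()
        mu = full_grad(w_ref)
```

Because the estimator is unbiased and its variance vanishes as `w_ref` tracks `w`, a constant step-size yields linear convergence, matching the rate behavior the abstract claims for the networked version.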
© 2022 IEEE.
| Original language | English |
|---|---|
| Article number | 9964422 |
| Pages (from-to) | 934-950 |
| Number of pages | 17 |
| Journal | IEEE Transactions on Network Science and Engineering |
| Volume | 10 |
| Issue number | 2 |
| Online published | 28 Nov 2022 |
| DOIs | |
| Publication status | Published - Mar 2023 |
| Externally published | Yes |
Funding
This work was supported in part by the National Natural Science Foundation of China under Grants 62073344, 62173278, and U2141234, in part by the Hainan Province Science and Technology Special Fund under Grant ZDYF2021GXJS041, and in part by the Shanghai Scientific and Technological Innovation Program under Grant 19510745200.
Research Keywords
- Distributed optimization
- unbalanced directed networks
- distributed learning problems
- distributed gradient descent algorithms
- multi-agent systems
- variance-reduced stochastic gradients
Fingerprint
Dive into the research topics of 'Push-LSVRG-UP: Distributed Stochastic Optimization Over Unbalanced Directed Networks With Uncoordinated Triggered Probabilities'. Together they form a unique fingerprint.