Optimal and Efficient Algorithms for Decentralized Online Convex Optimization

Abstract
We investigate decentralized online convex optimization (D-OCO), in which a set of local learners are required to minimize a sequence of global loss functions using only local computations and communications. Previous studies have established O(n^{5/4}ρ^{−1/2}√T) and O(n^{3/2}ρ^{−1} log T) regret bounds for convex and strongly convex functions, respectively, where n is the number of local learners, ρ < 1 is the spectral gap of the communication matrix, and T is the time horizon. However, large gaps remain from the existing lower bounds, i.e., Ω(n√T) for convex functions and Ω(n) for strongly convex functions. To fill these gaps, in this paper we first develop a novel D-OCO algorithm that reduces the regret bounds for convex and strongly convex functions to Õ(nρ^{−1/4}√T) and Õ(nρ^{−1/2} log T), respectively. The primary technique is an online accelerated gossip strategy that enjoys a faster average consensus among local learners. Furthermore, by carefully exploiting the spectral properties of a specific network topology, we enhance the lower bounds for convex and strongly convex functions to Ω(nρ^{−1/4}√T) and Ω(nρ^{−1/2}), respectively. These results suggest that the regret of our algorithm is nearly optimal in terms of T, n, and ρ for both convex and strongly convex functions. Finally, we propose a projection-free variant of our algorithm to efficiently handle practical applications with complex constraints. Our analysis reveals that the projection-free variant can achieve (Formula presented) and (Formula presented) regret bounds for convex and strongly convex functions with nearly optimal (Formula presented) and (Formula presented) communication rounds, respectively. ©2025 Yuanyu Wan, Tong Wei, Bo Xue, Mingli Song, and Lijun Zhang.
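The accelerated gossip idea behind the improved consensus rate can be illustrated with a small numerical sketch. This is not the authors' exact algorithm: it assumes a ring topology, an illustrative gossip matrix `ring_gossip_matrix`, and a standard heavy-ball (Chebyshev-type) momentum parameter chosen from the second-largest eigenvalue of the gossip matrix; it only demonstrates that the two-step gossip recurrence reaches average consensus faster than plain gossip.

```python
import numpy as np

def ring_gossip_matrix(n):
    """Doubly stochastic gossip matrix for a ring of n nodes:
    each node averages with itself and its two neighbours."""
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = 0.5
        W[i, (i - 1) % n] = 0.25
        W[i, (i + 1) % n] = 0.25
    return W

def plain_gossip(W, x0, iters):
    """Standard gossip averaging: x_{k+1} = W x_k."""
    x = x0.copy()
    for _ in range(iters):
        x = W @ x
    return x

def accelerated_gossip(W, x0, iters, eta):
    """Two-step (heavy-ball / Chebyshev-type) gossip:
    x_{k+1} = (1 + eta) W x_k - eta x_{k-1}.
    Uses the same number of W-multiplications as plain gossip."""
    x_prev, x = x0.copy(), W @ x0
    for _ in range(iters - 1):
        x, x_prev = (1 + eta) * (W @ x) - eta * x_prev, x
    return x

n = 20
rng = np.random.default_rng(0)
x0 = rng.standard_normal(n)   # one scalar value per local learner
W = ring_gossip_matrix(n)

# Illustrative momentum choice based on the second-largest eigenvalue of W.
lam2 = np.sort(np.linalg.eigvalsh(W))[-2]
eta = (1 - np.sqrt(1 - lam2 ** 2)) / (1 + np.sqrt(1 - lam2 ** 2))

target = np.full(n, x0.mean())  # exact average consensus
err_plain = np.linalg.norm(plain_gossip(W, x0, 30) - target)
err_acc = np.linalg.norm(accelerated_gossip(W, x0, 30, eta) - target)
print(err_plain, err_acc)
```

With the same 30 communication rounds, the accelerated recurrence is markedly closer to the exact average; this faster mixing is what converts a ρ^{−1/2}-type dependence on the spectral gap into a ρ^{−1/4}-type one.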
| Original language | English |
|---|---|
| Article number | 135 |
| Number of pages | 43 |
| Journal | Journal of Machine Learning Research |
| Volume | 26 |
| Publication status | Published - Jul 2025 |
Funding
This work was partially supported by the National Natural Science Foundation of China (62306275), the Zhejiang Provincial Natural Science Foundation of China (LMS25F030002), the Yongjiang Talent Introduction Programme (2023A-193-G), and the Ningbo Natural Science Foundation (2024J206). The authors are grateful to the anonymous reviewers and the editor for their helpful comments.
Research Keywords
- online convex optimization
- decentralized optimization
- optimal regret
- accelerated gossip strategy
- efficient algorithms
Publisher's Copyright Statement
- This full text is made available under CC-BY 4.0. https://creativecommons.org/licenses/by/4.0/