Distributed Constrained Optimization With Delayed Subgradient Information Over Time-Varying Network Under Adaptive Quantization

Research output: Journal Publications and Reviews (RGC: 21, 22, 62) › 21_Publication in refereed journal › peer-review


Detail(s)

Original language: English
Journal / Publication: IEEE Transactions on Neural Networks and Learning Systems
Online published: 12 May 2022
Publication status: Online published - 12 May 2022

Abstract

In this article, we consider a distributed constrained optimization problem with delayed subgradient information over a time-varying communication network, where each agent can only communicate with its neighbors and the communication channels have a limited data rate. We propose an adaptive quantization method to address this problem. A mirror descent algorithm with delayed subgradient information is established based on the theory of Bregman divergence. With a non-Euclidean Bregman projection-based scheme, the proposed method essentially generalizes many previous classical Euclidean projection-based distributed algorithms. Through the proposed adaptive quantization method, the optimal value can be attained without any quantization error. Furthermore, a comprehensive analysis of the convergence of the algorithm is carried out, and our results show that the optimal convergence rate O(1/√T) can be achieved under appropriate conditions. Finally, numerical examples are presented to demonstrate the effectiveness of our results.
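To make the ingredients named in the abstract more concrete (mirror descent steps driven by a Bregman divergence, delayed subgradients, and quantized exchange of iterates between neighbors), the following Python snippet gives a minimal toy sketch. It is not the authors' algorithm: it assumes an entropic mirror map on the probability simplex, a fixed ring communication graph instead of a time-varying network, a fixed delay, and a simple uniform quantizer whose range shrinks with the iteration count; all problem data and function names are illustrative.

```python
import numpy as np

def adaptive_quantize(x, center, radius, levels=16):
    """Uniform quantizer on the box [center - radius, center + radius];
    the caller shrinks `radius` over time, which is the adaptive part."""
    step = 2.0 * radius / (levels - 1)
    clipped = np.clip(x, center - radius, center + radius)
    return (center - radius) + np.round((clipped - (center - radius)) / step) * step

def entropic_mirror_step(x, g, eta):
    """One mirror-descent step on the simplex with the negative-entropy
    mirror map; the Bregman projection reduces to a normalization."""
    y = x * np.exp(-eta * g)
    return y / y.sum()

# Toy problem: n agents minimize sum_i ||A_i x - b_i||^2 over the simplex.
rng = np.random.default_rng(0)
n_agents, dim, T, delay = 4, 5, 2000, 3
A = [rng.standard_normal((3, dim)) for _ in range(n_agents)]
b = [rng.standard_normal(3) for _ in range(n_agents)]

x = [np.full(dim, 1.0 / dim) for _ in range(n_agents)]
history = [[x[i].copy()] * (delay + 1) for i in range(n_agents)]  # delayed iterates

for t in range(1, T + 1):
    eta = 1.0 / np.sqrt(t)      # diminishing step size, matching the O(1/sqrt(T)) rate
    radius = 1.0 / np.sqrt(t)   # shrinking quantization range
    new_x = []
    for i in range(n_agents):
        neighbors = [(i - 1) % n_agents, (i + 1) % n_agents]  # fixed ring graph
        mixed = x[i].copy()
        for j in neighbors:
            # Agent i only sees a quantized version of its neighbors' states.
            mixed += adaptive_quantize(x[j], center=x[i], radius=radius)
        mixed /= (len(neighbors) + 1)
        mixed = np.maximum(mixed, 1e-12)            # guard against quantization error
        x_old = history[i][0]                       # iterate from `delay` rounds ago
        g = 2.0 * A[i].T @ (A[i] @ x_old - b[i])    # delayed local subgradient
        new_x.append(entropic_mirror_step(mixed, g, eta))
    for i in range(n_agents):
        history[i] = history[i][1:] + [new_x[i]]
        x[i] = new_x[i]

print("final iterate of agent 0:", np.round(x[0], 3))
```

The entropic mirror map is chosen only because its Bregman projection onto the simplex has a closed form; any other Bregman divergence and constraint set would follow the same update pattern.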

Research Area(s)

  • Quantization (signal), Optimization, Mirrors, Convergence, Communication networks, Delay effects, Communication channels, Adaptive quantization, delayed subgradient information, distributed optimization, mirror descent algorithm, MIRROR-DESCENT ALGORITHM, MULTIAGENT OPTIMIZATION, CONSENSUS