Distributed Randomized Gradient-Free Mirror Descent Algorithm for Constrained Optimization

Research output: Journal Publications and Reviews › Publication in refereed journal › peer-review

1 Scopus Citations

Detail(s)

Original language: English
Journal / Publication: IEEE Transactions on Automatic Control
Publication status: Online published - 27 Apr 2021

Abstract

This paper is concerned with a multi-agent optimization problem. A distributed randomized gradient-free mirror descent (DRGFMD) method is developed by introducing a randomized gradient-free oracle into the mirror descent scheme, where a non-Euclidean Bregman divergence is used. The classical gradient descent method is thereby generalized without requiring subgradient information of the objective functions. The proposed algorithms are the first distributed non-Euclidean zeroth-order methods to achieve an approximate O(1/√T) rate of convergence over T iterations, recovering the best known optimal rate for distributed nonsmooth constrained convex optimization. Moreover, a decentralized reciprocal weighted averaging (RWA) approximating sequence is investigated for the first time, and the convergence of the RWA sequence is shown to hold over time-varying graphs. Rates of convergence are comprehensively explored for the algorithm with RWA (DRGFMD-RWA). The technique for constructing the decentralized RWA sequence provides new insight into searching for minimizers in distributed algorithms.
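The abstract gives no pseudocode, so the following is only a rough illustrative sketch of one synchronous round of a distributed zeroth-order mirror descent of this general flavor, assuming an entropy mirror map over the probability simplex (so the Bregman step becomes an exponentiated-gradient update) and a two-point Gaussian-smoothing oracle in the style of Nesterov's gradient-free estimator. The names (`gf_oracle`, `drgfmd_step`) and parameters (`W`, `eta`, `mu`) are assumptions for illustration, not the paper's DRGFMD specification.

```python
import numpy as np

def gf_oracle(f, x, mu, rng):
    """Two-point randomized gradient-free (zeroth-order) estimate of a
    gradient of f at x via Gaussian smoothing; mu is the smoothing radius."""
    u = rng.standard_normal(x.shape)
    return (f(x + mu * u) - f(x)) / mu * u

def drgfmd_step(fs, X, W, eta, mu, rng):
    """One synchronous round of a distributed gradient-free mirror descent sketch.

    fs : list of local objectives f_i, one per agent
    X  : (n_agents, d) current iterates, each row on the probability simplex
    W  : (n_agents, n_agents) doubly stochastic mixing matrix
    Uses the entropy mirror map, so the non-Euclidean Bregman step is a
    multiplicative update followed by renormalization onto the simplex."""
    Y = W @ X                                # consensus: mix neighbors' iterates
    Z = np.empty_like(X)
    for i, f in enumerate(fs):
        g = gf_oracle(f, Y[i], mu, rng)      # zeroth-order gradient estimate
        z = Y[i] * np.exp(-eta * g)          # entropy-Bregman mirror step
        Z[i] = z / z.sum()                   # project back onto the simplex
    return Z

# Tiny usage sketch: 3 agents minimizing a sum of local linear costs on the simplex.
rng = np.random.default_rng(0)
d, n = 5, 3
cs = [rng.standard_normal(d) for _ in range(n)]
fs = [lambda x, c=c: c @ x for c in cs]
W = np.full((n, n), 1.0 / n)                 # complete-graph mixing for illustration
X = np.full((n, d), 1.0 / d)                 # start at the uniform distribution
for t in range(1, 201):
    X = drgfmd_step(fs, X, W, eta=0.5 / np.sqrt(t), mu=1e-3, rng=rng)
```

A diminishing step size on the order of 1/√t, as used above, is the standard choice behind O(1/√T) convergence rates of the kind stated in the abstract.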

Research Area(s)

  • Approximation algorithms, Convergence, Convex functions, Linear programming, Machine learning algorithms, Mirrors, Optimization