Distributed Randomized Gradient-Free Mirror Descent Algorithm for Constrained Optimization

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

36 Scopus Citations

Detail(s)

Original language: English
Pages (from-to): 957-964
Journal / Publication: IEEE Transactions on Automatic Control
Volume: 67
Issue number: 2
Online published: 27 Apr 2021
Publication status: Published - Feb 2022

Abstract

This paper is concerned with a multi-agent constrained optimization problem. A distributed randomized gradient-free mirror descent (DRGFMD) method is developed by introducing a randomized gradient-free oracle into the mirror descent scheme, where a non-Euclidean Bregman divergence is used. The classical gradient descent method is thereby generalized without requiring subgradient information of the objective functions. The proposed algorithms are the first distributed non-Euclidean zeroth-order methods to achieve an approximate O(1/√T) rate of convergence over T iterations, recovering the best known optimal rate for distributed nonsmooth constrained convex optimization. Moreover, a decentralized reciprocal weighted averaging (RWA) approximating sequence is investigated for the first time, and the convergence of the RWA sequence is shown to hold over time-varying graphs. Rates of convergence are explored comprehensively for the algorithm with RWA (DRGFMD-RWA). The technique for constructing the decentralized RWA sequence provides new insight into the search for minimizers in distributed algorithms.
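
As a rough illustration of the two ingredients named in the abstract, the sketch below combines a two-point randomized gradient-free oracle (in the style of Gaussian smoothing, which uses only function values rather than subgradients) with an entropic mirror descent step over the probability simplex, whose Bregman divergence is the KL divergence. This is a minimal single-agent sketch, not the authors' DRGFMD algorithm: the smoothing parameter `mu`, the O(1/√t) step size, the negative-entropy distance-generating function, and the uniform iterate averaging are illustrative assumptions, and the consensus step over the network (as well as the RWA sequence) is omitted.

```python
import numpy as np

def gradient_free_oracle(f, x, mu, rng):
    """Two-point randomized zeroth-order gradient estimate.

    Uses only function evaluations of f (no subgradients):
    g = (f(x + mu*u) - f(x)) / mu * u,  with u ~ N(0, I).
    """
    u = rng.standard_normal(x.shape)
    return (f(x + mu * u) - f(x)) / mu * u

def entropic_mirror_step(x, g, eta):
    """Mirror descent step on the simplex with the negative-entropy
    distance-generating function (Bregman divergence = KL), which
    reduces to a multiplicative-weights update followed by
    normalization back onto the simplex."""
    w = x * np.exp(-eta * g)
    return w / w.sum()

def gradient_free_mirror_descent(f, x0, T, mu=1e-3, rng=None):
    """Run T zeroth-order mirror descent steps with an O(1/sqrt(t))
    step size and return the running average of the iterates."""
    rng = rng or np.random.default_rng(0)
    x, avg = x0.copy(), x0.copy()
    for t in range(1, T + 1):
        g = gradient_free_oracle(f, x, mu, rng)
        x = entropic_mirror_step(x, g, eta=1.0 / np.sqrt(t))
        avg += (x - avg) / (t + 1)  # uniform averaging of iterates
    return avg

# Usage: minimize a nonsmooth convex function over the 4-simplex.
f = lambda x: np.abs(x - 1.0 / x.size).sum()
x_hat = gradient_free_mirror_descent(f, np.full(4, 0.25), T=5000)
```

In the distributed setting described by the abstract, each agent would additionally average its neighbors' states under a (possibly time-varying) weight matrix before applying such a mirror step, and a weighted average of the iterates such as the RWA sequence would serve as the algorithm's output.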

Research Area(s)

  • Approximation algorithms, Convergence, Convex functions, Linear programming, Machine learning algorithms, Mirrors, Optimization