Distributed Optimization over Multi-agent Networks

Project: Research



This proposal considers the problem of distributed multi-agent optimization (DMAO) over time-varying networks. In DMAO, each agent is endowed with a local private objective function, and the main task of the network is to collectively minimize the sum of the agents' objective functions through local computations and communications. DMAO has attracted much attention due to its widespread applications in areas such as machine learning, sensor networks, smart grids, and distributed control systems.

Stochastic gradient (SG) based optimization methods are among the most fundamental methods in convex and nonconvex optimization. In the centralized setting, stochastic subgradient based methods have been widely investigated, and classical variants such as stochastic gradient descent, stochastic mirror descent, the stochastic dual-averaging method, and stochastic coordinate descent have been well developed in recent years. For these methods, the convergence rate, which describes how fast the sequence generated by an SG method approaches the optimal solution, is one of the most important subjects of study. Deriving optimal convergence rates for SG based methods under different settings and assumptions is crucial for the development of the field. For instance, in nonsmooth convex optimization it is indispensable to design SG based methods that achieve a convergence rate of O(1/√T) after T iteration steps.

However, in many real-world networked systems the situation is much more complicated, and centralized SG methods usually cannot be applied directly to the optimization problems arising from the network. It is therefore highly desirable to develop effective optimization methods for these settings that recover the optimal convergence rates of the centralized setting. This project aims to develop different types of distributed optimization algorithms for multi-agent optimization over time-varying networks.
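The DMAO setup described above can be illustrated with a minimal decentralized stochastic gradient sketch. The following Python snippet is purely illustrative and not part of the proposal: the quadratic local objectives, the time-varying "lazy ring" mixing matrix, the noise level, and the step-size schedule are all assumed choices. Each agent mixes its iterate with one neighbor's (the neighbor changes over time, modeling a time-varying network) and then takes a noisy local gradient step, so that all agents approach the minimizer of the sum of the local objectives.

```python
import numpy as np

# Toy decentralized stochastic gradient descent (DSGD) sketch.
# Local objectives f_i(x) = 0.5 * ||x - c_i||^2 are illustrative choices;
# the global minimizer of sum_i f_i is the mean of the c_i.
rng = np.random.default_rng(0)
n, dim, T = 5, 3, 2000
c = rng.normal(size=(n, dim))   # each agent's private data
x = np.zeros((n, dim))          # one local iterate per agent

def mixing_matrix(t, n):
    """Doubly stochastic weights for a time-varying communication graph:
    each agent averages with its right or left ring neighbor, alternating
    with t, so the topology changes at every step."""
    shift = 1 if t % 2 == 0 else -1
    W = 0.5 * np.eye(n)
    for i in range(n):
        W[i, (i + shift) % n] += 0.5
    return W

for t in range(T):
    eta = 1.0 / np.sqrt(t + 1)                      # diminishing step size
    g = (x - c) + 0.01 * rng.normal(size=x.shape)   # noisy local gradients
    x = mixing_matrix(t, n) @ x - eta * g           # consensus + gradient step

x_star = c.mean(axis=0)  # minimizer of the sum of local objectives;
                         # every local iterate ends up close to x_star
```

The 1/√(t+1) step size mirrors the O(1/√T) rate discussed above for nonsmooth convex problems; the doubly stochastic mixing weights are what allow the network average to track the centralized gradient step.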
Further, in contrast to existing distributed SG based methods, which all assume independent and identically distributed (i.i.d.) stochastic disturbances on the gradient, this project will investigate the influence of non-i.i.d. stochasticity behind the stochastic gradient on the performance of distributed optimization methods. The convergence performance will also be analyzed in an application domain: in particular, the distributed optimization algorithms developed in this proposal will be applied to the distributed online regularized regression problem. The theoretical results generated in this proposal, if well accomplished, will lay a solid foundation for the development of a wide range of fields, including stochastic optimization and statistical machine learning. In particular, the investigation of non-i.i.d. SG based distributed optimization algorithms in this project will provide a fundamental technical procedure for future work to improve existing i.i.d. SG based distributed optimization methods.
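The distributed online regularized regression application mentioned above can be sketched as follows. This is a hypothetical toy instance, not the proposal's actual formulation: the ridge regularizer, ring topology, data model, and step-size schedule are all assumptions made for illustration. At each round every agent observes one private sample, takes a gradient step on its regularized least-squares loss, and averages with its ring neighbors.

```python
import numpy as np

# Toy distributed online ridge regression sketch; all constants are
# illustrative assumptions. Agents jointly track the minimizer of
# E[0.5*(a^T x - b)^2] + 0.5*lam*||x||^2, which here equals
# x_true / (1 + lam) because the features a have identity covariance.
rng = np.random.default_rng(1)
n, dim, T, lam = 4, 5, 3000, 0.01
x_true = rng.normal(size=dim)
x = np.zeros((n, dim))

# fixed doubly stochastic ring weights: self 1/2, each neighbor 1/4
W = 0.5 * np.eye(n)
for i in range(n):
    W[i, (i + 1) % n] += 0.25
    W[i, (i - 1) % n] += 0.25

for t in range(T):
    eta = 0.1 / np.sqrt(t + 1)                    # diminishing step size
    A = rng.normal(size=(n, dim))                 # one fresh sample per agent
    b = A @ x_true + 0.1 * rng.normal(size=n)     # noisy streaming labels
    resid = np.einsum("ij,ij->i", A, x) - b       # per-agent residuals
    g = A * resid[:, None] + lam * x              # gradient of local ridge loss
    x = W @ x - eta * g                           # consensus + online step
```

Because the samples here are drawn i.i.d. at every round, this sketch corresponds to the existing i.i.d. setting; the non-i.i.d. regime the project targets would replace the fresh draws of `A` and `b` with temporally dependent streams.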


Project number: 9043137
Grant type: GRF
Effective start/end date: 1/01/22 → …