Decentralized Learning: Statistics Meets Optimization

Project: Research


Description

Decentralized optimization over a directed or undirected network has received significant attention in recent years. Compared with the simpler master-worker architecture, a general network topology avoids the communication bottleneck at the master node and makes the system more robust to the failure of individual nodes. Although the convergence of gradient-based methods in decentralized optimization has been studied intensively, establishing rates at which the iterates approach the consensus solution, much less is known about how the iterates converge to the population parameter when the objective function is constructed from a statistical model based on a random sample. It turns out that there are important differences between convergence to the consensus solution and convergence to the population parameter. For example, for some statistical models with a non-smooth and non-strongly-convex loss function, linear convergence can still be shown up to the statistical precision. This is in stark contrast to the pure optimization problem, where only sublinear convergence is possible if the objective function is not strongly convex. We expect that successful completion of the proposed research will significantly expand the existing knowledge in this area and lead to a more complete theoretical understanding of decentralized learning when optimization and statistics come together.
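
To illustrate the setting (not the project's own method), the following minimal Python sketch runs decentralized gradient descent on a ring of nodes, where each node holds a local least-squares loss built from its own random sample and alternates gossip averaging with neighbours and a local gradient step. The network size, loss, step size, and variable names are illustrative assumptions.

```python
# Minimal sketch of decentralized gradient descent (DGD) on an undirected ring.
# All problem sizes and the step size below are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim, local_samples = 8, 5, 50
theta_star = rng.normal(size=dim)               # population parameter (assumed)

# Each node i observes (A_i, b_i) with b_i = A_i theta_star + noise.
A = [rng.normal(size=(local_samples, dim)) for _ in range(n_nodes)]
b = [Ai @ theta_star + 0.1 * rng.normal(size=local_samples) for Ai in A]

# Doubly stochastic mixing matrix for the ring: equal weight on self and the
# two neighbours, so every row and column sums to one.
W = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    W[i, i] = W[i, (i - 1) % n_nodes] = W[i, (i + 1) % n_nodes] = 1.0 / 3.0

def local_grad(i, x):
    """Gradient of node i's local least-squares loss at x."""
    return A[i].T @ (A[i] @ x - b[i]) / local_samples

X = np.zeros((n_nodes, dim))                    # one local iterate per node
alpha = 0.05                                    # step size (assumed)
for _ in range(500):
    # Gossip-average with neighbours, then take a local gradient step.
    grads = np.array([local_grad(i, X[i]) for i in range(n_nodes)])
    X = W @ X - alpha * grads

print("max deviation from population parameter:", np.abs(X - theta_star).max())
```

Up to the choice of step size, each node's iterate approaches a neighbourhood of the population parameter whose size is governed by the statistical precision of the pooled sample, which is the kind of guarantee the project contrasts with purely optimization-based consensus rates.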

Detail(s)

Project number: 9043717
Grant type: GRF
Status: Active
Effective start/end date: 1/10/24 → …