Abstract
In this brief, we consider multiagent optimization over a network in which multiple agents seek to minimize a sum of nonsmooth but Lipschitz continuous functions, subject to a convex state constraint set. The underlying network topology is modeled as time-varying. We propose a randomized derivative-free method in which, at each update, random gradient-free oracles are used in place of subgradients (SGs). In contrast to existing work, we do not require agents to be able to compute the SGs of their objective functions. We establish convergence of the method to an approximate solution of the multiagent optimization problem, with an error level that depends on the smoothing parameter and the Lipschitz constant of each agent's objective function. Finally, a numerical example is provided to demonstrate the effectiveness of the method.
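The abstract does not reproduce the paper's update rule, so the sketch below is only an illustrative assumption of how such a scheme is typically structured: a two-point Gaussian-smoothing gradient-free oracle (in the spirit of Nesterov's randomized smoothing) replaces the SG, combined with a consensus step over time-varying mixing matrices and a projection onto the constraint set. The local objectives, mixing matrices, step sizes, and helper names are hypothetical and not taken from the paper.

```python
# Illustrative sketch (not the paper's exact algorithm): consensus-based
# randomized gradient-free updates for minimizing sum_i f_i(x) over a box
# constraint, with alternating mixing matrices mimicking a time-varying graph.
import numpy as np

rng = np.random.default_rng(0)
n, d = 4, 3                      # number of agents, decision dimension
A = rng.normal(size=(n, d))      # data defining the assumed local objectives
b = rng.normal(size=n)

def f(i, x):
    """Nonsmooth but Lipschitz local objective of agent i (assumed form)."""
    return abs(A[i] @ x - b[i])

def gf_oracle(i, x, mu):
    """Random gradient-free oracle: two-point smoothing estimate using a
    Gaussian direction u and smoothing parameter mu (assumed oracle form)."""
    u = rng.normal(size=d)
    return (f(i, x + mu * u) - f(i, x)) / mu * u

def project(x, lo=-5.0, hi=5.0):
    """Euclidean projection onto the convex constraint set (a box here)."""
    return np.clip(x, lo, hi)

# Two doubly stochastic mixing matrices, alternated over time so that the
# union of the communication graphs is connected.
W1 = np.array([[0.5, 0.5, 0.0, 0.0],
               [0.5, 0.5, 0.0, 0.0],
               [0.0, 0.0, 0.5, 0.5],
               [0.0, 0.0, 0.5, 0.5]])
W2 = np.array([[0.5, 0.0, 0.5, 0.0],
               [0.0, 0.5, 0.0, 0.5],
               [0.5, 0.0, 0.5, 0.0],
               [0.0, 0.5, 0.0, 0.5]])

X = rng.normal(size=(n, d))      # one local estimate per agent
mu = 1e-3                        # smoothing parameter
for k in range(1, 5001):
    W = W1 if k % 2 else W2      # current (time-varying) topology
    alpha = 1.0 / np.sqrt(k)     # diminishing step size
    mixed = W @ X                # consensus step over current topology
    G = np.stack([gf_oracle(i, mixed[i], mu) for i in range(n)])
    X = np.array([project(mixed[i] - alpha * G[i]) for i in range(n)])

print("agents' estimates:\n", X)
print("objective at average:", sum(f(i, X.mean(axis=0)) for i in range(n)))
```

The small smoothing parameter and diminishing step size in this sketch reflect the abstract's statement that the achievable accuracy is limited by the smoothing parameter and the Lipschitz constants: with a fixed smoothing level, such a method can only be expected to reach an approximate solution.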
| Original language | English |
|---|---|
| Article number | 6870494 |
| Pages (from-to) | 1342-1347 |
| Journal | IEEE Transactions on Neural Networks and Learning Systems |
| Volume | 26 |
| Issue number | 6 |
| Online published | 1 Aug 2014 |
| DOIs | |
| Publication status | Published - Jun 2015 |
Research Keywords
- Average consensus
- Distributed multiagent system
- Distributed optimization
- Networked control systems
Projects
- GRF: Fault Analysis in Distributed Networked Systems: Diagnosis, Control and Recovery
HO, W. C. D. (Principal Investigator / Project Coordinator)
1/12/13 → 6/02/17
Project: Research