Distributed Online Convex Optimization with Statistical Privacy
Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review
Author(s)
Dai, Mingcheng; Ho, Daniel W. C.; Zhang, Baoyong et al.
Related Research Unit(s)
Detail(s)
| Original language | English |
| --- | --- |
| Journal / Publication | IEEE Transactions on Neural Networks and Learning Systems |
| Online published | 19 Nov 2024 |
| Publication status | Online published - 19 Nov 2024 |
Link(s)
Abstract
We focus on the problem of distributed online constrained convex optimization with statistical privacy in multiagent systems. The participating agents aim to collaboratively minimize the cumulative system-wide cost while a passive adversary corrupts some of them. The passive adversary collects information from the corrupted agents and attempts to estimate the private information of the uncorrupted ones. In this scenario, we adopt a correlated perturbation mechanism with a globally balanced property to mask the local information of agents and thereby preserve privacy. This work is the first attempt to integrate such a mechanism into the distributed online (sub)gradient descent algorithm, yielding a new algorithm called privacy-preserving distributed online convex optimization (PP-DOCO). It is proved that the designed algorithm provides a statistical privacy guarantee for uncorrupted agents and achieves an expected regret of O(√K) for convex cost functions, where K denotes the time horizon. Furthermore, an improved expected regret of O(log(K)) is derived for strongly convex cost functions. These bounds match the best regret scalings achieved by state-of-the-art algorithms. A privacy bound is established to quantify the level of statistical privacy using the notion of Kullback-Leibler divergence (KLD). In addition, we observe a tradeoff between our algorithm's expected regret and its statistical privacy. Finally, the effectiveness of our algorithm is validated by simulation results.
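The abstract describes the mechanism only at a high level. As a rough illustration of the idea, the Python sketch below runs distributed online projected (sub)gradient descent in which agents exchange states masked by correlated, zero-sum ("globally balanced") perturbations before a consensus step. The network, step sizes, cost functions, constraint set, and noise construction here are assumptions for illustration and are not taken from the paper's PP-DOCO algorithm.

```python
import numpy as np

# Illustrative sketch (not the paper's exact PP-DOCO algorithm): each agent adds a
# correlated perturbation before sharing its state with neighbors. The perturbations
# sum to zero across agents (the "globally balanced" property), so consensus averaging
# is not biased while individual states are masked from a passive adversary.

rng = np.random.default_rng(0)
n_agents, dim = 5, 3
W = np.full((n_agents, n_agents), 1.0 / n_agents)  # doubly stochastic mixing matrix (assumed)
x = rng.standard_normal((n_agents, dim))           # local decision variables

def project_ball(v, radius=10.0):
    """Euclidean projection onto a ball, standing in for the constraint set (assumed)."""
    norm = np.linalg.norm(v)
    return v if norm <= radius else v * (radius / norm)

def balanced_noise(n, d, scale=0.5):
    """Correlated perturbations that sum to zero over all agents."""
    eta = scale * rng.standard_normal((n, d))
    return eta - eta.mean(axis=0)                   # enforce the zero-sum (balanced) property

for k in range(1, 101):                             # time horizon K = 100 (assumed)
    step = 1.0 / np.sqrt(k)                         # O(1/sqrt(k)) step size for convex costs
    grads = 2.0 * (x - 1.0)                         # gradients of a toy cost f_i(x) = ||x - 1||^2
    masked = x + balanced_noise(n_agents, dim)      # agents share only perturbed states
    mixed = W @ masked                              # consensus step on the masked states
    x = np.array([project_ball(mixed[i] - step * grads[i]) for i in range(n_agents)])

print("final average decision:", x.mean(axis=0))
```

Because the perturbations cancel in aggregate, the averaged dynamics behave like the unperturbed algorithm, while larger noise scales strengthen masking at the cost of slower per-agent convergence, which is consistent with the regret/privacy tradeoff noted in the abstract.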
© 2024 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.
Research Area(s)
- Distributed (sub)gradient descent algorithm, online convex optimization (OCO), regret, statistical privacy
Citation Format(s)
Distributed Online Convex Optimization with Statistical Privacy. / Dai, Mingcheng; Ho, Daniel W. C.; Zhang, Baoyong et al.
In: IEEE Transactions on Neural Networks and Learning Systems, 19.11.2024.
Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review