Distributed Online Convex Optimization with Statistical Privacy

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review



Detail(s)

Original language: English
Journal / Publication: IEEE Transactions on Neural Networks and Learning Systems
Online published: 19 Nov 2024
Publication status: Online published - 19 Nov 2024

Abstract

We focus on the problem of distributed online constrained convex optimization with statistical privacy in multiagent systems. The participating agents aim to collaboratively minimize the cumulative system-wide cost while a passive adversary corrupts some of them. The passive adversary collects information from the corrupted agents and attempts to estimate the private information of the uncorrupted ones. In this scenario, we adopt a correlated perturbation mechanism with a globally balanced property to mask the local information of agents and enable privacy preservation. This work is the first to integrate such a mechanism into the distributed online (sub)gradient descent algorithm, yielding a new algorithm called privacy-preserving distributed online convex optimization (PP-DOCO). It is proved that the designed algorithm provides a statistical privacy guarantee for uncorrupted agents and achieves an expected regret of O(√K) for convex cost functions, where K denotes the time horizon. Furthermore, an improved expected regret of O(log(K)) is derived for strongly convex cost functions. These results match the best regret scalings achieved by state-of-the-art algorithms. A privacy bound is established to characterize the level of statistical privacy in terms of the Kullback-Leibler divergence (KLD). In addition, we observe that a tradeoff exists between our algorithm's expected regret and its statistical privacy. Finally, the effectiveness of our algorithm is validated by simulation results.

© 2024 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.
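Below is a minimal Python sketch of the kind of update the abstract describes: distributed online (sub)gradient descent in which agents only reveal perturbed states, with pairwise-canceling noise so the perturbations sum to zero across the network (a globally balanced mechanism). It assumes a doubly stochastic mixing matrix over the communication graph; the function and parameter names (pp_doco_sketch, grad, weights, noise_scale) are hypothetical and do not follow the paper's notation or its exact algorithm.

```python
import numpy as np

def pp_doco_sketch(grad, weights, horizon, dim, noise_scale=1.0, seed=0):
    """Sketch of distributed online (sub)gradient descent with correlated,
    globally balanced perturbations (the injected noise cancels network-wide).

    grad(k, i, x) -> subgradient of agent i's round-k cost at point x
    weights       -> doubly stochastic mixing matrix of the communication graph
    """
    rng = np.random.default_rng(seed)
    n = weights.shape[0]
    x = np.zeros((n, dim))                      # each agent's local iterate
    for k in range(1, horizon + 1):
        step = 1.0 / np.sqrt(k)                 # O(1/sqrt(k)) step for convex costs
        # Pairwise-canceling noise on each communicating edge: agent i adds
        # +s, agent j adds -s, so perturbations sum to zero ("globally
        # balanced") and the network-wide average state is unchanged.
        noise = np.zeros((n, dim))
        for i in range(n):
            for j in range(i + 1, n):
                if weights[i, j] > 0:
                    s = rng.normal(scale=noise_scale, size=dim)
                    noise[i] += s
                    noise[j] -= s
        shared = x + noise                      # only perturbed states are revealed
        mixed = weights @ shared                # consensus over perturbed states
        g = np.stack([grad(k, i, x[i]) for i in range(n)])
        x = mixed - step * g                    # local (sub)gradient update
    return x

# Hypothetical usage: quadratic per-round costs f_{i,k}(x) = ||x - c_{i,k}||^2 / 2
# over a 4-agent ring graph with a doubly stochastic weight matrix.
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])
targets = np.random.default_rng(1).normal(size=(100, 4, 2))
grad = lambda k, i, x: x - targets[k - 1, i]
x_final = pp_doco_sketch(grad, W, horizon=100, dim=2)
```

With a step size proportional to 1/√k the convex case matches the O(√K) scaling quoted in the abstract; a step size proportional to 1/k is the usual choice behind O(log(K)) rates for strongly convex costs. Larger noise_scale strengthens privacy against a passive adversary but slows convergence, reflecting the regret-privacy tradeoff noted above.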

Research Area(s)

  • Distributed (sub)gradient descent algorithm, online convex optimization (OCO), regret, statistical privacy