Stochastic Continuous Submodular Maximization: Boosting via Non-oblivious Function
Research output: Chapters, Conference Papers, Creative and Literary Works (RGC: 12, 32, 41, 45) › 32_Refereed conference paper (with ISBN/ISSN) › peer-review
Author(s)
Related Research Unit(s)
Detail(s)
| Original language | English |
| --- | --- |
| Title of host publication | Proceedings of the 39th International Conference on Machine Learning |
| Editors | Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, Sivan Sabato |
| Publisher | PMLR |
| Pages | 26116-26134 |
| Number of pages | 19 |
| Publication status | Published - Jul 2022 |
Publication series
| Name | Proceedings of Machine Learning Research |
| --- | --- |
| Publisher | PMLR |
| Volume | 162 |
| ISSN (Print) | 2640-3498 |
Conference
| Title | 39th International Conference on Machine Learning (ICML 2022) |
| --- | --- |
| Location | Virtual |
| Place | United States |
| City | Baltimore |
| Period | 17 - 23 July 2022 |
Link(s)
Permanent Link | https://scholars.cityu.edu.hk/en/publications/publication(2d2f6450-24ee-4720-8482-d64488407228).html |
Abstract
In this paper, we revisit Stochastic Continuous Submodular Maximization in both offline and online settings, which has wide applications in machine learning and operations research. We present a boosting framework covering gradient ascent and online gradient ascent. The fundamental ingredient of our methods is a novel non-oblivious function F derived from a factor-revealing optimization problem, every stationary point of which provides a (1 - e^{-γ})-approximation to the global maximum of the γ-weakly DR-submodular objective function f ∈ C_L^{1,1}(X). In the offline setting, we propose a boosting gradient ascent method achieving a (1 - e^{-γ} - ε^2)-approximation after O(1/ε^2) iterations, which improves on the (γ^2/(1 + γ^2))-approximation ratio of the classical gradient ascent algorithm. In the online setting, we consider, for the first time, adversarial delays for stochastic gradient feedback, under which we propose a boosting online gradient algorithm based on the same non-oblivious function F. We verify that this boosting online algorithm achieves a regret of O(√D) against a (1 - e^{-γ})-approximation to the best feasible solution in hindsight, where D is the sum of the delays of the gradient feedback. To the best of our knowledge, this is the first result to obtain O(√T) regret against a (1 - e^{-γ})-approximation with O(1) gradient inquiries at each time step when no delay exists, i.e., D = T. Finally, numerical experiments demonstrate the effectiveness of our boosting methods.
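For intuition, here is a minimal, hypothetical sketch of how a boosting step of this kind might look: rather than ascending on f directly, each projected gradient step uses a stochastic gradient of f evaluated at a rescaled point z·x, with z drawn from a density proportional to e^{γ(z-1)} on [0, 1]. The sampling rule, the function and parameter names (boosted_gradient_ascent, grad_f, project, step, iters), and the toy objective are assumptions for illustration only; the exact non-oblivious function F comes from the factor-revealing problem in the paper and is not reproduced here.

```python
import numpy as np

def boosted_gradient_ascent(grad_f, project, x0, gamma=1.0, step=0.05, iters=2000, rng=None):
    """Projected stochastic gradient ascent on a non-oblivious surrogate of f.

    Illustrative assumption: the surrogate gradient at x is estimated by
    grad_f(z * x), with z sampled from the density proportional to
    exp(gamma * (z - 1)) on [0, 1]. The true non-oblivious function F is
    defined in the paper via a factor-revealing problem and may differ.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        # Inverse-CDF sample of z with density proportional to exp(gamma*(z-1)) on [0, 1].
        u = rng.uniform()
        z = 1.0 + np.log(u + (1.0 - u) * np.exp(-gamma)) / gamma
        g = grad_f(z * x)              # stochastic gradient of the surrogate at x
        x = project(x + step * g)      # projected ascent step over the feasible set
    return x

# Toy usage: maximize the monotone DR-submodular f(x) = sum(log(1 + x))
# over the box [0, 1]^5, with coordinate-wise projection.
if __name__ == "__main__":
    d = 5
    x_hat = boosted_gradient_ascent(
        grad_f=lambda x: 1.0 / (1.0 + x),
        project=lambda x: np.clip(x, 0.0, 1.0),
        x0=np.zeros(d),
        gamma=1.0,
    )
    print(x_hat)
```

In this toy run the iterate is driven toward the all-ones vertex, the maximizer of this separable concave (hence DR-submodular) objective on the unit box.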
Citation Format(s)
Stochastic Continuous Submodular Maximization: Boosting via Non-oblivious Function. / Zhang, Qixin; Deng, Zengde; Chen, Zaiyi et al.
Proceedings of the 39th International Conference on Machine Learning. ed. / Kamalika Chaudhuri; Stefanie Jegelka; Le Song; Csaba Szepesvari; Gang Niu; Sivan Sabato. PMLR, 2022. p. 26116-26134 (Proceedings of Machine Learning Research; Vol. 162).