Stochastic Continuous Submodular Maximization: Boosting via Non-oblivious Function

Research output: Chapters, Conference Papers, Creative and Literary Works (RGC: 12, 32, 41, 45) › 32_Refereed conference paper (with ISBN/ISSN) › peer-review


Author(s)

Zhang, Qixin; Deng, Zengde; Chen, Zaiyi; et al.

Related Research Unit(s)

Detail(s)

Original language: English
Title of host publication: Proceedings of the 39th International Conference on Machine Learning
Editors: Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, Sivan Sabato
Publisher: PMLR
Pages: 26116-26134
Number of pages: 19
Publication status: Published - Jul 2022

Publication series

Name: Proceedings of Machine Learning Research
Publisher: PMLR
Volume: 162
ISSN (Print): 2640-3498

Conference

Title: 39th International Conference on Machine Learning (ICML 2022)
Location: Virtual
Place: United States
City: Baltimore
Period: 17 - 23 July 2022

Abstract

In this paper, we revisit Stochastic Continuous Submodular Maximization in both offline and online settings, which has broad applications in machine learning and operations research. We present a boosting framework covering gradient ascent and online gradient ascent. The fundamental ingredient of our methods is a novel non-oblivious function F derived from a factor-revealing optimization problem, every stationary point of which provides a (1 - e^{-γ})-approximation to the global maximum of the γ-weakly DR-submodular objective function f ∈ C^{1,1}(X). In the offline scenario, we propose a boosting gradient ascent method achieving a (1 - e^{-γ} - ε²)-approximation after O(1/ε²) iterations, which improves on the γ²/(1 + γ²) approximation ratio of the classical gradient ascent algorithm. In the online setting, we consider for the first time adversarial delays for stochastic gradient feedback, under which we propose a boosting online gradient algorithm with the same non-oblivious function F. Meanwhile, we verify that this boosting online algorithm achieves a regret of O(√D) against a (1 - e^{-γ})-approximation to the best feasible solution in hindsight, where D is the sum of the delays of the gradient feedback. To the best of our knowledge, this is the first result to obtain O(√T) regret against a (1 - e^{-γ})-approximation with O(1) gradient inquiries at each time step when no delay exists, i.e., D = T. Finally, numerical experiments demonstrate the effectiveness of our boosting methods.
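To make the offline method concrete, below is a minimal illustrative sketch (not the authors' code) of boosted projected gradient ascent on a toy monotone DR-submodular quadratic over the box [0, 1]^d. It assumes the surrogate (non-oblivious) gradient takes the form ∫₀¹ e^{γ(z-1)} ∇f(z·x) dz, estimated with one sample of z per step; the exact definition of F, the step sizes, and the constraint sets are specified in the paper.

```python
# Hedged sketch: "boosted" projected gradient ascent for a monotone DR-submodular
# objective on the box [0, 1]^d. The surrogate gradient below is an assumed form
# of the non-oblivious construction, not the authors' exact implementation.
import numpy as np

rng = np.random.default_rng(0)
d, gamma, eta, T = 5, 1.0, 0.1, 500

# Toy monotone DR-submodular objective (illustration only):
# f(x) = a^T x - 0.5 x^T H x with H >= 0 entrywise, so all cross second
# derivatives are non-positive.
H = np.full((d, d), 0.1)
np.fill_diagonal(H, 0.2)
a = np.ones(d)

def grad_f(x):
    return a - H @ x

def sample_z(gamma):
    # Inverse-CDF sample from the density p(z) ∝ exp(gamma * (z - 1)) on [0, 1].
    u = rng.uniform()
    return 1.0 + np.log(np.exp(-gamma) + u * (1.0 - np.exp(-gamma))) / gamma

def boosted_grad(x, gamma):
    # One-sample estimate of the surrogate gradient:
    # E[g] = integral_0^1 exp(gamma * (z - 1)) * grad_f(z * x) dz.
    z = sample_z(gamma)
    return (1.0 - np.exp(-gamma)) / gamma * grad_f(z * x)

x = np.zeros(d)
for _ in range(T):
    g = boosted_grad(x, gamma)
    x = np.clip(x + eta * g, 0.0, 1.0)  # projection onto the box is a clip

print("final point:", np.round(x, 3))
print("objective  :", a @ x - 0.5 * x @ H @ x)
```

The same surrogate-gradient oracle can be plugged into online (projected) gradient ascent, which is the basis of the delayed-feedback online variant described in the abstract.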

Citation Format(s)

Stochastic Continuous Submodular Maximization: Boosting via Non-oblivious Function. / Zhang, Qixin; Deng, Zengde; Chen, Zaiyi et al.

Proceedings of the 39th International Conference on Machine Learning. ed. / Kamalika Chaudhuri; Stefanie Jegelka; Le Song; Csaba Szepesvari; Gang Niu; Sivan Sabato. PMLR, 2022. p. 26116-26134 (Proceedings of Machine Learning Research; Vol. 162).

Research output: Chapters, Conference Papers, Creative and Literary Works (RGC: 12, 32, 41, 45) › 32_Refereed conference paper (with ISBN/ISSN) › peer-review