New Algorithms for Continuous DR-submodular Maximization in Dynamic and Decentralized Environments: Simple Implementation, Tight Approximation and High Efficiency
Student thesis: Doctoral Thesis
Award date: 12 Aug 2024
Permanent link: https://scholars.cityu.edu.hk/en/theses/theses(c4369c95-2be9-4b2f-9b26-5fe98c75b3f5).html
Abstract
Continuous DR-submodular maximization has drawn wide attention over the past decades, since it finds numerous real-world applications in machine learning, operations research, and statistics, such as non-definite quadratic programming, revenue maximization, and viral marketing, to name a few. A large body of existing literature has studied the centralized and static continuous DR-submodular maximization problem. However, in many real scenarios, the objective functions are not only distributed across a network of computing nodes but also vary over time. Given these practical challenges and demands, this thesis revolves around the following three questions:
1. Can we boost projected gradient ascent (PGA) methods to achieve the optimal (1-1/e)-approximation ratio for monotone continuous DR-submodular maximization problems? To the best of our knowledge, standard PGA attains only a sub-optimal (1/2)-approximation ratio for this problem (a minimal sketch of this baseline follows the list below).
2. Is it possible to design efficient algorithms for online continuous DR-submodular maximization? Previous studies of online DR-submodular maximization rely heavily on complicated techniques, such as meta actions and blocking procedures, which incur a large number of time-consuming gradient evaluations in each round. Furthermore, for non-monotone continuous DR-submodular objectives, the existing online algorithm must discretize the original constrained domain and lift the n-dimensional subroutine into a linear program over a higher (Mn)-dimensional space, which imposes a heavy computational burden when the time horizon is large.
3. Can we design communication-efficient algorithms for decentralized online continuous DR-submodular maximization that guarantee the tight (1-1/e)-approximation ratio? Existing methods for this problem must query a significant number of stochastic gradient estimates for each local objective function and then pass these gradient messages over the network one by one, resulting in substantial computation and communication overhead.
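For concreteness, the following minimal sketch (illustrative, not taken from the thesis) constructs a monotone continuous DR-submodular quadratic objective and runs the standard PGA baseline from question 1. For a twice-differentiable f, continuous DR-submodularity is equivalent to every entry of the Hessian being non-positive.

```python
import numpy as np

# Illustrative assumptions: a random quadratic f(x) = h^T x + 0.5 x^T H x
# with all Hessian entries <= 0 (continuous DR-submodular) and a gradient
# that stays non-negative on [0,1]^n (monotone).
rng = np.random.default_rng(0)
n = 5
A = rng.uniform(0.0, 1.0, (n, n))
H = -(A + A.T) / 2                 # all Hessian entries <= 0: DR-submodular
h = -H @ np.ones(n)                # gradient h + H x = H (x - 1) >= 0 on the box

def f(x):
    return h @ x + 0.5 * x @ H @ x

def grad_f(x):
    return h + H @ x

def pga(steps=200, eta=0.1):
    """Standard PGA on the box [0,1]^n: gradient ascent step, then projection."""
    x = np.zeros(n)
    for _ in range(steps):
        x = np.clip(x + eta * grad_f(x), 0.0, 1.0)
    return x

x_pga = pga()   # a stationary point of f: only a (1/2)-approximation guarantee
```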
The main contributions of this thesis are threefold:
Firstly, we design a novel non-oblivious function derived from a factor-revealing optimization problem, any stationary point of which provides an optimal (1-1/e)-approximation to the global maximum of the original DR-submodular objective. In the offline scenario, we utilize this non-oblivious function to devise a boosted gradient ascent method that achieves the (1-1/e)-approximation, improving upon the (1/2)-approximation ratio of the classical gradient ascent algorithm.
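As a rough illustration, the sketch below assumes the non-oblivious surrogate takes the integral form used in the boosting literature, F(x) = \int_0^1 (e^{z-1}/z) f(zx) dz, whose gradient \nabla F(x) = \int_0^1 e^{z-1} \nabla f(zx) dz admits a one-sample unbiased estimator; the thesis' factor-revealing construction may differ in detail, and the box constraint and quadratic objective are illustrative.

```python
import numpy as np

# Sampling z with density e^{z-1} / (1 - 1/e) on [0,1] makes
# (1 - 1/e) * grad_f(z * x) an unbiased estimator of grad F(x).
rng = np.random.default_rng(2)
n = 5
A = rng.uniform(0.0, 1.0, (n, n))
H = -(A + A.T) / 2                 # DR-submodular quadratic, as before
h = -H @ np.ones(n)                # monotone on [0,1]^n

def grad_f(x):
    return h + H @ x

def boosted_grad(x):
    u = rng.uniform()
    z = 1.0 + np.log(u * (1 - np.e**-1) + np.e**-1)   # inverse-CDF sample of z
    return (1 - np.e**-1) * grad_f(z * x)

def boosted_pga(steps=2000, eta=0.05):
    """PGA run on the surrogate F instead of f."""
    x = np.zeros(n)
    for _ in range(steps):
        x = np.clip(x + eta * boosted_grad(x), 0.0, 1.0)
    return x  # a stationary point of F is a (1 - 1/e)-approximation for f
```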
Secondly, we present the Meta-Measured-Frank-Wolfe algorithm with a (1/e)-regret of O(T^{1/2}) at the cost of T^{3/2} stochastic gradient evaluations per round, the first algorithm to obtain a (1/e)-regret of O(T^{1/2}) for the online non-monotone continuous DR-submodular maximization problem over a down-closed convex set. Moreover, in sharp contrast with previous algorithms, Meta-Measured-Frank-Wolfe depends only on a simple online linear oracle, without discretization, lifting, or rounding operations. Then, we propose the Mono-Measured-Frank-Wolfe algorithm, which reduces the per-function stochastic gradient evaluations from T^{3/2} to 1 and achieves a (1/e)-regret bound of O(T^{4/5}). Next, we extend Mono-Measured-Frank-Wolfe to the bandit setting and propose the Bandit-Measured-Frank-Wolfe algorithm, which attains a (1/e)-regret bound of O(T^{8/9}). To the best of our knowledge, these two online algorithms are the first sub-linear-regret algorithms to explore the one-shot and bandit settings for online non-monotone continuous DR-submodular maximization over a down-closed convex set.
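To make the oracle model concrete, the following sketch shows the measured Frank-Wolfe (measured continuous greedy) update that underlies this algorithm family: each step calls a linear oracle on the current gradient and moves by x <- x + (1/K) v * (1 - x). This is an offline caricature under illustrative assumptions (a knapsack-style down-closed set and a random quadratic), not the online Meta/Mono/Bandit variants themselves.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
A = rng.uniform(0.0, 1.0, (n, n))
H = -(A + A.T) / 2
h = rng.uniform(-0.5, 1.0, n)            # possibly non-monotone objective

def grad_f(x):
    return h + H @ x

a, b = np.ones(n), 2.0                   # down-closed set {v in [0,1]^n : a.v <= b}

def linear_oracle(g):
    """Fractional knapsack: maximize <v, g> over the down-closed set."""
    v, budget = np.zeros(n), b
    for i in np.argsort(-g / a):         # best reward-to-cost ratio first
        if g[i] <= 0 or budget <= 0:
            break
        v[i] = min(1.0, budget / a[i])
        budget -= a[i] * v[i]
    return v

def measured_frank_wolfe(K=100):
    x = np.zeros(n)
    for _ in range(K):
        v = linear_oracle(grad_f(x))
        x = x + (v * (1.0 - x)) / K      # "measured" step: stays feasible,
    return x                             # gives 1/e-approximation for non-monotone f
```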
Lastly, we propose two communication-efficient decentralized online algorithms for the monotone continuous DR-submodular maximization problem, both of which reduce the number of per-function gradient evaluations and the per-round communication complexity from T^{3/2} to 1. First, we present the first one-shot and projection-free decentralized online algorithm for monotone continuous DR-submodular maximization, named One-shot Decentralized Meta-Frank-Wolfe, which achieves a (1-1/e)-regret bound of O(T^{4/5}). Next, inspired by the non-oblivious boosting function proposed in the first contribution, we propose the Decentralized Online Boosting Gradient Ascent algorithm, which attains a (1-1/e)-regret of O(T^{1/2}). To the best of our knowledge, this is the first result to obtain the optimal regret against the (1-1/e)-approximation with only one gradient query per local objective function per step.
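The sketch below illustrates one plausible round of such a decentralized update, assuming gossip averaging with a doubly stochastic weight matrix followed by a single boosted local gradient step per node, which matches the one-gradient-query-per-round property; the weights, step sizes, and local quadratic objectives are illustrative assumptions, and the thesis' exact Decentralized Online Boosting Gradient Ascent update may differ in detail.

```python
import numpy as np

rng = np.random.default_rng(4)
n, num_nodes = 5, 4
W = np.full((num_nodes, num_nodes), 1.0 / num_nodes)   # complete-graph gossip weights

def make_local_grad():
    """One illustrative monotone DR-submodular quadratic per node."""
    A = rng.uniform(0.0, 1.0, (n, n))
    H = -(A + A.T) / 2
    h = -H @ np.ones(n)
    return lambda x: h + H @ x

local_grads = [make_local_grad() for _ in range(num_nodes)]

def boosted(g, x):
    u = rng.uniform()
    z = 1.0 + np.log(u * (1 - np.e**-1) + np.e**-1)
    return (1 - np.e**-1) * g(z * x)     # one gradient query per node per round

def round_update(X, eta):
    """X has one row per node: gossip-average, then one boosted ascent step."""
    mixed = W @ X
    stepped = np.stack([mixed[i] + eta * boosted(local_grads[i], mixed[i])
                        for i in range(num_nodes)])
    return np.clip(stepped, 0.0, 1.0)    # projection back onto the box

X = np.zeros((num_nodes, n))
for t in range(1, 101):
    X = round_update(X, eta=0.5 / np.sqrt(t))
```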