New Algorithms for Continuous DR-submodular Maximization in Dynamic and Decentralized Environments: Simple Implementation, Tight Approximation and High Efficiency
Student thesis: Doctoral Thesis
Award date  12 Aug 2024 
Link(s)
Permanent Link  https://scholars.cityu.edu.hk/en/theses/theses(c4369c952be94b2f9b265fe98c75b3f5).html 
Abstract
Continuous DR-submodular maximization has drawn wide attention in the past decades, since it finds numerous real-world applications in machine learning, operations research, and statistics, such as non-definite quadratic programming, revenue maximization, and viral marketing, to name a few. Previously, a large body of literature has studied centralized and static continuous DR-submodular maximization problems. However, in many real scenarios, the objectives are not only stored in a network of computing nodes but also vary with time. Given these practical challenges and demands, this thesis revolves around the following three issues:
1. Can we boost the Projected Gradient Ascent (PGA) method to achieve the optimal (1 − 1/e) approximation ratio for monotone continuous DR-submodular maximization problems? As far as we know, standard PGA can only attain a suboptimal (1/2)-approximation ratio for monotone continuous DR-submodular maximization.
2. Is it possible to design efficient algorithms for online continuous DR-submodular maximization? Previous studies on online DR-submodular maximization rely heavily on complicated tricks, such as meta actions and blocking procedures, which incur a large number of time-consuming gradient evaluations in each round. Furthermore, for non-monotone continuous DR-submodular objectives, the existing online algorithm requires discretizing the original constrained domain and lifting the n-dimensional subroutine problem into a solvable linear program in a higher (Mn)-dimensional space, which brings a heavy computational burden when the time horizon is large.
3. Can we design communication-efficient algorithms for the decentralized online continuous DR-submodular maximization problem that guarantee the tight (1 − 1/e) approximation ratio? Existing methods for decentralized online continuous DR-submodular maximization need to query a significant number of stochastic gradient estimates for each local objective function and then pass these gradient messages over the network one by one, which results in a large computation and communication overhead.
The main contributions of this thesis are threefold:
Firstly, we design a novel non-oblivious function from a factor-revealing optimization problem, any stationary point of which provides an optimal (1 − 1/e)-approximation to the global maximum of the original DR-submodular objective function. In the offline scenario, we utilize this non-oblivious function to devise a boosting gradient ascent method achieving a (1 − 1/e)-approximation, which improves on the (1/2)-approximation ratio of the classical gradient ascent algorithm.
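To illustrate the boosting idea, the following is a minimal sketch, not the thesis's implementation: projected gradient ascent driven by a Monte-Carlo estimate of the non-oblivious gradient ∇F(x) = ∫_0^1 e^{z−1} ∇f(zx) dz, run on an illustrative monotone DR-submodular objective f(x) = Σ_i log(1 + x_i) over the box [0,1]^n (the objective, step size, and sample count are our own assumptions):

```python
import numpy as np

def grad_f(x):
    # Gradient of the toy monotone DR-submodular objective f(x) = sum_i log(1 + x_i).
    return 1.0 / (1.0 + x)

def boosted_grad(x, rng, num_samples=50):
    # Monte-Carlo estimate of the non-oblivious gradient
    #   grad F(x) = integral_0^1 e^{z-1} grad f(z * x) dz.
    zs = rng.uniform(0.0, 1.0, num_samples)
    return np.mean([np.exp(z - 1.0) * grad_f(z * x) for z in zs], axis=0)

def boosted_pga(n=5, steps=200, lr=0.1, seed=42):
    # Projected gradient ascent on the non-oblivious surrogate F over [0,1]^n.
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    for _ in range(steps):
        x = np.clip(x + lr * boosted_grad(x, rng), 0.0, 1.0)  # ascent + box projection
    return x
```

On this separable toy objective the global maximizer over the box is the all-ones vector, and the boosted iterates climb toward it; the point of the surrogate in the thesis is that its stationary points inherit a (1 − 1/e) guarantee that plain PGA lacks.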
Secondly, we present the Meta-Measured-Frank-Wolfe algorithm with a (1/e)-regret of O(T^{1/2}) at the cost of T^{3/2} stochastic gradient evaluations per round, which is the first algorithm to obtain a (1/e)-regret of O(T^{1/2}) for the online non-monotone continuous DR-submodular maximization problem over a down-closed convex set. Moreover, in sharp contrast with previous algorithms, the Meta-Measured-Frank-Wolfe algorithm depends only on a simple online linear oracle, without discretization, lifting, or rounding operations. Then, we propose the Mono-Measured-Frank-Wolfe algorithm, which reduces the per-function stochastic gradient evaluations from T^{3/2} to 1 and achieves a (1/e)-regret bound of O(T^{4/5}). Next, we extend Mono-Measured-Frank-Wolfe to the bandit setting and propose the Bandit-Measured-Frank-Wolfe algorithm, which attains a (1/e)-regret bound of O(T^{8/9}). To the best of our knowledge, these two online algorithms are the first sublinear-regret algorithms to explore the one-shot and bandit settings for the online non-monotone continuous DR-submodular maximization problem over a down-closed convex set.
Lastly, we propose two communication-efficient decentralized online algorithms for the monotone continuous DR-submodular maximization problem, both of which reduce the number of per-function gradient evaluations and the per-round communication complexity from T^{3/2} to 1. First, we present the first one-shot and projection-free decentralized online algorithm for monotone continuous DR-submodular maximization, named One-shot Decentralized Meta-Frank-Wolfe, which achieves a (1 − 1/e)-regret bound of O(T^{4/5}). Next, inspired by the non-oblivious boosting function proposed in the first contribution, we propose the Decentralized Online Boosting Gradient Ascent algorithm, which attains a (1 − 1/e)-regret of O(T^{1/2}). As far as we know, this is the first result to obtain the optimal regret against a (1 − 1/e)-approximation with only one gradient query for each local objective function per step.
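The communication pattern behind such one-query-per-round decentralized schemes can be sketched as follows (a minimal illustration under our own assumptions about the gossip matrix, step size, and box constraint, not the thesis's exact algorithm): each round, every node averages its neighbors' iterates through a doubly stochastic matrix W, takes one local gradient step, and projects back onto the box:

```python
import numpy as np

def decentralized_step(X, W, grads, lr=0.05):
    # X:     (num_nodes, n) matrix of local iterates, one row per node.
    # W:     doubly stochastic gossip matrix respecting the network topology.
    # grads: (num_nodes, n) one local gradient estimate per node per round.
    X_next = W @ X + lr * grads        # gossip averaging + single local ascent step
    return np.clip(X_next, 0.0, 1.0)   # projection onto the box [0,1]^n
```

With a complete-graph averaging matrix and zero gradients, one step drives all nodes to exact consensus at the network average; sparser doubly stochastic W trades consensus speed for per-round communication, which is the trade-off the regret analysis has to control.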