Algorithms for sparse optimization : complexity and applications
Student thesis: Doctoral Thesis
Detail(s)

Award date | 16 Feb 2015 |
---|---|

Link(s)

Permanent Link | https://scholars.cityu.edu.hk/en/theses/theses(97d9d769-f180-45dc-a8be-081bd19ab118).html |
---|---|
Abstract
Exploiting the sparsity of an underlying problem is an important research topic with
extensive applications in statistics, machine learning, signal processing, compressed
sensing, and other fields. One way to exploit sparsity is to add a sparse regularizer,
such as the L0 norm, to the original model. However, such sparse models often carry
a high computational cost due to the combinatorial nature of the sparse regularizer,
which restricts their application to small-scale problems. Compressed sensing theory
shows that, under certain conditions, the convex relaxation of some sparse models has
the same solution as the original model, which enables the design of efficient
algorithms for large-scale sparse problems. In the same vein, a class of non-convex
but continuous sparse models has been proposed. These models not only possess
desirable variable-selection properties, but also recover sparse solutions under
weaker conditions than the convex relaxation requires. In this dissertation, we
systematically study a class of non-convex continuous sparse models and develop
efficient algorithms to solve the related problems.
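To fix notation, the three models referenced above can be written in a representative form as follows (a sketch of standard formulations; the thesis's exact constraints and scaling may differ):

```latex
% L0-regularized least squares (combinatorial):
\min_{x \in \mathbb{R}^n} \; \|Ax - b\|_2^2 + \lambda \|x\|_0
% Its convex relaxation (the L1 / Lasso model):
\min_{x \in \mathbb{R}^n} \; \|Ax - b\|_2^2 + \lambda \|x\|_1
% The non-convex continuous relaxation (the L2-Lp model, 0 < p < 1):
\min_{x \in \mathbb{R}^n} \; \|Ax - b\|_2^2 + \lambda \|x\|_p^p,
\qquad \|x\|_p^p = \sum_{i=1}^{n} |x_i|^p
```

For 0 < p < 1 the penalty ‖x‖_p^p is continuous but neither convex nor Lipschitz at the origin, which is the source of the algorithmic difficulties addressed in Chapters 3 and 4.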
In Chapter 2, we discuss the optimization properties of a class of non-convex
continuous sparse models (the L2 − Lp model). In particular, we characterize the
optimal solutions, discuss the relationship between solution sparsity and the model
parameters, and examine the connections among several sparse models. These results
provide a foundation for the efficient algorithms developed in later chapters.
Several results by other researchers are also included in Chapter 2 for completeness.
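As one concrete instance of the sparsity–parameter relationship, in the convex L1 relaxation the zero solution is optimal exactly when λ is large enough; this threshold is classical, while the Lp thresholds analyzed in the thesis are model-specific:

```latex
% For  min ||Ax - b||_2^2 + lambda ||x||_1 ,  the zero vector is optimal
% exactly when lambda dominates the gradient of the data-fit term at 0:
x^{\star} = 0 \iff \lambda \ge 2\,\|A^{\top} b\|_{\infty}
```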
Chapter 3 presents a fast first-order iterative algorithm for solving linearly
constrained non-convex minimization problems. We show that this algorithm reaches
an ϵ-KKT point within O(ϵ^-2) iterations. The algorithm is then extended to the
L2 − Lp minimization problem, using smoothing and lower-bound cut techniques to
handle the non-Lipschitz continuity of the objective function; the complexity result
still holds for the linearly constrained L2 − Lp minimization problem. We likewise
employ several techniques, such as noise debiasing and an adaptive parameter-selection
mechanism, to improve the algorithm's speed and solution quality. Numerical results on
several applications demonstrate the efficiency of the proposed algorithms.
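To make the smoothing idea concrete, below is a minimal numerical sketch in Python, assuming the common smoothed surrogate (x_i^2 + μ^2)^(p/2) for |x_i|^p. It is illustrative only: the thesis's algorithm additionally uses a lower-bound cut, debiasing, and adaptive parameters to obtain the O(ϵ^-2) guarantee, none of which are reproduced here.

```python
# Illustrative sketch (not the thesis's algorithm): gradient descent on a
# smoothed L2-Lp objective  f_mu(x) = ||Ax - b||^2 + lam * sum((x_i^2 + mu^2)^(p/2)).
# The smoothing parameter mu makes the non-Lipschitz Lp term differentiable.
import numpy as np

def smoothed_l2lp(A, b, lam=0.1, p=0.5, mu=1e-3, step=None, iters=500):
    """Minimize ||Ax-b||^2 + lam*sum((x_i^2+mu^2)^(p/2)) by gradient descent."""
    m, n = A.shape
    x = np.zeros(n)
    if step is None:
        # Conservative fixed step size from a curvature bound: the least-squares
        # part has Lipschitz gradient 2*sigma_max(A)^2, and the smoothed Lp term
        # has second derivative bounded by p * mu^(p-2) for mu > 0.
        step = 1.0 / (2 * np.linalg.norm(A, 2) ** 2 + lam * p / mu ** (2 - p))
    for _ in range(iters):
        grad = 2 * A.T @ (A @ x - b) \
               + lam * p * x * (x ** 2 + mu ** 2) ** (p / 2 - 1)
        x -= step * grad
    return x

# Small demo with a sparse ground truth (synthetic data, for illustration only).
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[3, 17, 42]] = [1.5, -2.0, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = smoothed_l2lp(A, b, lam=0.05, p=0.5, mu=1e-3, iters=2000)
print("largest entries:", np.argsort(-np.abs(x_hat))[:5])
```

Smoothing makes the gradient Lipschitz, so a fixed step size can be chosen from a curvature bound; as μ shrinks, the surrogate approaches the true Lp penalty at the cost of a smaller admissible step.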
In Chapter 4, we propose an iterative algorithm that, under mild conditions, converges
linearly to an ϵ-KKT solution of the linearly constrained L2 − Lp minimization problem.
By analyzing the special structure of the objective function, we separate the convex,
concave, and non-Lipschitz continuous parts and handle each differently. Our analysis
shows that the proposed algorithm attains a linear convergence rate, i.e., it reaches
an ϵ-KKT solution within O(log(1/ϵ)) iterations, a significant improvement over the
previous best result of O(ϵ^-3/2) iterations.
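For reference, one common definition of an ϵ-KKT point for a linearly constrained problem min f(x) subject to Cx = d (an assumed canonical form; the thesis's precise definition may differ in the norm and multiplier handling) is:

```latex
% x is an epsilon-KKT point if, for some multiplier y,
\operatorname{dist}\!\left(0,\; \partial f(x) + C^{\top} y\right) \le \epsilon
\quad \text{and} \quad \|Cx - d\| \le \epsilon .
% Linear convergence reaches such a point in O(log(1/epsilon)) iterations,
% versus the O(epsilon^{-3/2}) iterations of the previous best bound.
```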
Lastly, we discuss future research directions.
Keywords: Mathematical optimization, Sparse matrices