L_{1} and L_{0} Minimization Algorithms and Their Applications in Signal Processing
Student thesis: Doctoral Thesis
Award date: 29 Oct 2018
Permanent Link: https://scholars.cityu.edu.hk/en/theses/theses(6b30228eb00f4678bc98ed24d9444dec).html
Abstract
L_{1} and L_{0} minimization algorithms are widely used in signal processing. They can be applied to many signal compression and signal approximation problems, and they are also effective for noise reduction and parameter selection. In this thesis, several novel L_{1} and L_{0} minimization algorithms are developed, and the applications of these proposed algorithms are studied.
The aim of signal recovery in compressed sensing is to estimate a sparse signal from a measurement matrix and an observation vector. It can be formulated as an L_{0} minimization problem. However, due to the nature of the L_{0}-norm, this problem is NP-hard, so approximate functions of the L_{0}-norm are needed to relax the original problem. Among them, the L_{1}-norm is the best convex surrogate. Much work has been done on L_{1} minimization, and many off-the-shelf algorithms have been proposed. However, most of them cannot offer real-time solutions, which limits their application prospects to some extent. To address this issue, the Lagrange programming neural network (LPNN) is used here. Since the LPNN framework requires that the objective function and all constraints be differentiable, the concept of the locally competitive algorithm (LCA) is introduced to handle the non-differentiable L_{1}-norm term. This method, named LPNN-LCA-L_{1}, has local asymptotic convergence. Furthermore, inspired by the projection neural network (PNN), the dynamics of LPNN-LCA-L_{1} can be further modified for the basis pursuit (BP) problem. We prove that the modified method has global asymptotic stability and an accelerated convergence speed. This improved method is called the modified LPNN-LCA-L_{1} in this thesis.
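To illustrate how an L_{1} penalty maps onto analog dynamics, the following is a minimal discrete-time (Euler) simulation of generic LCA dynamics on a toy compressed-sensing instance. It is only a sketch of the LCA component, not the full LPNN-LCA-L_{1} network with its Lagrangian constraint handling; the problem sizes and parameters are illustrative choices.

```python
import numpy as np

def soft_threshold(u, lam):
    # LCA activation induced by the L1 penalty
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def lca_l1(Phi, y, lam=0.05, dt=0.05, steps=4000):
    """Euler simulation of LCA dynamics for
    min_a 0.5*||y - Phi a||_2^2 + lam*||a||_1."""
    n = Phi.shape[1]
    G = Phi.T @ Phi - np.eye(n)    # lateral inhibition between neurons
    b = Phi.T @ y                  # feed-forward driving input
    u = np.zeros(n)                # internal (membrane) states
    for _ in range(steps):
        a = soft_threshold(u, lam)
        u += dt * (b - u - G @ a)  # du/dt = Phi^T y - u - (Phi^T Phi - I) a
    return soft_threshold(u, lam)

# toy instance: 3-sparse signal, 40 random measurements in dimension 100
rng = np.random.default_rng(0)
Phi = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.5, -2.0, 1.0]
x_hat = lca_l1(Phi, Phi @ x_true)
```

At a fixed point, u = Φᵀy − (ΦᵀΦ − I)a with a = soft_threshold(u, λ), which is exactly the stationarity condition of the L_{1}-regularized problem; the thresholding activation keeps the output sparse throughout the evolution.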
However, compared with the L_{1}-norm, nonconvex functions approximate the L_{0}-norm better and can reconstruct the signal from fewer observations. Hence, this thesis proposes another analog neural network for sparse signal recovery in this setting. It is also based on the LPNN-LCA framework, but uses an L_{0}-norm-like function rather than the L_{1}-norm, and is named LPNN-LCA-L_{0}. We prove that an equilibrium point of its dynamics is locally asymptotically stable and is a local optimal solution.
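The effect of swapping the convex penalty for an L_{0}-like one can be sketched by changing only the activation in the same toy dynamics: hard thresholding (an L_{0}-style activation) removes the amplitude bias that soft thresholding introduces, and recovery succeeds with fewer measurements. Again this is an illustrative stand-in, not the thesis's LPNN-LCA-L_{0} network or its specific L_{0}-norm-like function.

```python
import numpy as np

def hard_threshold(u, lam):
    # L0-style activation: pass entries whose magnitude exceeds the threshold
    return np.where(np.abs(u) > lam, u, 0.0)

def lca_l0(Phi, y, lam=0.5, dt=0.05, steps=4000):
    """Same Euler LCA dynamics as in the L1 sketch, but with a
    hard-thresholding activation corresponding to an L0-like penalty."""
    n = Phi.shape[1]
    G = Phi.T @ Phi - np.eye(n)   # lateral inhibition
    b = Phi.T @ y                 # driving input
    u = np.zeros(n)
    for _ in range(steps):
        a = hard_threshold(u, lam)
        u += dt * (b - u - G @ a)
    return hard_threshold(u, lam)

# fewer observations (30 instead of 40) for the same 3-sparse signal
rng = np.random.default_rng(0)
Phi = rng.standard_normal((30, 100)) / np.sqrt(30)
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.5, -2.0, 1.0]
x_hat = lca_l0(Phi, Phi @ x_true)
```

Unlike the soft-thresholded solution, a fixed point with the correct support satisfies Φ_Sᵀ(y − Φa) = 0, i.e. an unbiased least-squares fit on the support.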
Next, several other applications of the L_{1} and L_{0} minimization algorithms are explored. First, an ellipse fitting problem is investigated. Given a set of 2-dimensional (2D) scattered points, usually obtained from an edge detection process, the aim of ellipse fitting is to construct the elliptic equation that best fits the collected observations. However, some of the scattered 2D points may be outliers due to imperfect edge detection. To address this issue, the fitting task is formulated as a constrained optimization problem in which the objective function is either an L_{1}-norm or an L_{0}-norm term. Thus, the LPNN-LCA framework can be used to handle it.
Target localization under the time-of-arrival (TOA) model is another practical problem. Due to non-line-of-sight (NLOS) propagation and other perturbations, TOA measurements may contain outliers. Hence, to reduce the influence of these outliers, this problem is also formulated as a constrained optimization problem in which the objective function is an L_{1}-norm term. The problem is solved by the approximate L_{1}-norm LPNN method and by LPNN-LCA-L_{1}.
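The robustness argument can be seen in a small numerical sketch: with one NLOS-corrupted range, a least-absolute-deviation (L_{1}) cost essentially ignores the outlier, whereas a least-squares cost would be pulled by it. The solver below uses IRLS with Gauss-Newton steps as a conventional stand-in for the LPNN-based methods; the anchor layout and bias values are invented for the example.

```python
import numpy as np

def toa_l1(anchors, d, x0, iters=200, eps=1e-6):
    """Least-absolute-deviation TOA localization:
    min_x sum_i | d_i - ||x - s_i|| |, via IRLS Gauss-Newton steps."""
    x = x0.astype(float).copy()
    for _ in range(iters):
        diff = x - anchors                        # shape (n_anchors, 2)
        ranges = np.linalg.norm(diff, axis=1)
        r = ranges - d                            # range residuals
        w = 1.0 / np.maximum(np.abs(r), eps)      # IRLS weights for the L1 cost
        J = diff / ranges[:, None]                # Jacobian of ranges w.r.t. x
        A = J.T @ (w[:, None] * J) + 1e-9 * np.eye(2)
        g = J.T @ (w * r)
        x -= np.linalg.solve(A, g)
    return x

# four clean anchors plus one NLOS-biased range measurement
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0],
                    [10.0, 10.0], [5.0, 13.0]])
x_true = np.array([3.0, 4.0])
d = np.linalg.norm(anchors - x_true, axis=1)
d[0] += 3.0                                       # NLOS bias on the first anchor
x_hat = toa_l1(anchors, d, x0=np.array([5.0, 5.0]))
```

Since four of the five ranges remain exact, the L_{1} minimizer sits at the true target: moving away from it increases four residuals linearly, which outweighs any reduction in the single biased residual.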
Finally, the training and center selection problem of radial basis function (RBF) neural networks under concurrent faults is studied. The concurrent faults include multiplicative weight noise and open weight faults. This problem can be formulated as an optimization problem with an L_{0}-norm term, and we propose a novel approach to handle it: the minimax concave penalty (MCP) replaces the L_{0}-norm term, and the alternating direction method of multipliers (ADMM) is used to solve the resulting problem.
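The MCP-plus-ADMM idea can be sketched on a generic sparse least-squares problem: MCP behaves like the L_{1} penalty near zero but flattens out, so large weights are not shrunk. The code below is a generic sketch of MCP-regularized regression solved by ADMM, not the thesis's fault-tolerant RBF objective (which additionally models weight noise and open faults); λ, γ, ρ and the data are illustrative.

```python
import numpy as np

def mcp_prox(v, lam, gamma, rho):
    """Proximal operator of the minimax concave penalty (needs gamma*rho > 1):
    zero below lam/rho, rescaled soft-thresholding in the middle,
    identity (no shrinkage) beyond gamma*lam."""
    out = np.zeros_like(v)
    soft = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
    mid = (np.abs(v) > lam / rho) & (np.abs(v) <= gamma * lam)
    out[mid] = soft[mid] / (1.0 - 1.0 / (gamma * rho))
    big = np.abs(v) > gamma * lam
    out[big] = v[big]
    return out

def admm_mcp(A, b, lam=0.1, gamma=3.0, rho=1.0, iters=300):
    """ADMM for min_w 0.5*||A w - b||^2 + MCP(w; lam, gamma)."""
    n = A.shape[1]
    w = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    for _ in range(iters):
        rhs = Atb + rho * (z - u)
        w = np.linalg.solve(L.T, np.linalg.solve(L, rhs))  # quadratic w-update
        z = mcp_prox(w + u, lam, gamma, rho)               # MCP z-update
        u += w - z                                         # dual ascent
    return z

# toy sparse regression: 3 nonzero weights out of 20
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 20)) / np.sqrt(50)
w_true = np.zeros(20)
w_true[[2, 7, 15]] = [2.0, -1.5, 1.0]
z = admm_mcp(A, A @ w_true)
```

Because all three true weights exceed γλ = 0.3, they land in the unshrunk region of the prox, so the recovered values are essentially unbiased, unlike an L_{1} (soft-thresholding) solution with the same λ.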