L1 and L0 Minimization Algorithms and Their Applications in Signal Processing
Student thesis: Doctoral Thesis
L1 and L0 minimization algorithms are widely used in signal processing. They can be applied to many signal compression and signal approximation problems, and they are also effective for noise reduction and parameter selection. In this thesis, several novel L1 and L0 minimization algorithms are developed, and the applications of these proposed algorithms are studied.
The aim of signal recovery in compressed sensing is to estimate a sparse signal from a measurement matrix and an observation vector. This task can be formulated as an L0 minimization problem. However, owing to the nature of the L0-norm, the problem is NP-hard, so approximate functions of the L0-norm are needed to relax it. Among these surrogates, the L1-norm is the best convex one. Many off-the-shelf L1 minimization algorithms have been proposed, but most of them cannot offer real-time solutions, which limits their application prospects. To address this issue, the Lagrange programming neural network (LPNN) is used here. Since LPNN requires that the objective function and all constraints be differentiable, the concept of the local competitive algorithm (LCA) is introduced to handle the non-differentiable L1-norm term. The resulting method, named LPNN-LCA-L1, is locally asymptotically convergent. In addition, inspired by the projection neural network (PNN), the dynamics of LPNN-LCA-L1 can be further modified for the basis pursuit (BP) problem. We prove that the modified method is globally asymptotically stable and that its convergence is accelerated; it is called the modified LPNN-LCA-L1 in this thesis.
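The LPNN-LCA-L1 network evolves in continuous time and is not reproduced here. As a discrete-time point of reference for the same L1-regularized sparse recovery task, the sketch below uses iterative soft thresholding (ISTA), a standard algorithm rather than the thesis's method; the matrix `A`, vector `b`, regularization weight, step size, and helper names are all illustrative assumptions.

```python
# ISTA sketch for the LASSO surrogate of sparse recovery:
#   min_x  0.5 * ||A x - b||_2^2 + lam * ||x||_1
# A standard discrete-time L1 solver, shown only for intuition.

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (elementwise soft thresholding)."""
    return [max(abs(vi) - t, 0.0) * (1.0 if vi >= 0 else -1.0) for vi in v]

def ista(A, b, lam, step, iters=500):
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # residual r = A x - b, gradient g = A^T r
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = soft_threshold([x[j] - step * g[j] for j in range(n)], step * lam)
    return x

# Two measurements of a 3-dimensional signal whose true support is {2}:
# b was generated by x_true = [0, 0, 1].
A = [[1.0, 0.0, 1.0],
     [0.0, 1.0, 1.0]]
b = [1.0, 1.0]
x_hat = ista(A, b, lam=0.05, step=0.3)
# x_hat is sparse with x_hat[2] near (but slightly below) 1: the L1
# penalty shrinks the surviving coefficient, a bias discussed below.
```

The step size obeys the usual proximal-gradient rule (below the reciprocal of the largest eigenvalue of A^T A, here 3), which is what guarantees convergence of the discrete iteration.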
However, compared with the L1-norm, nonconvex functions can better approximate the L0-norm and can reconstruct the signal from fewer observations. Hence, this thesis proposes another analog neural network for sparse signal recovery. It is also based on the LPNN-LCA framework but uses an L0-norm-like function rather than the L1-norm, and it is named LPNN-LCA-L0. We prove that an equilibrium point of its dynamics is locally asymptotically stable and is a local optimal solution.
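Again, the analog LPNN-LCA-L0 dynamics are not reproduced here; a common discrete-time stand-in for L0-constrained recovery is iterative hard thresholding (IHT), sketched below on the same illustrative `A` and `b` as before. It illustrates the point made above: the hard-thresholded estimate recovers the coefficient without the shrinkage bias that an L1 penalty introduces.

```python
# IHT sketch for the L0-constrained problem
#   min_x  0.5 * ||A x - b||_2^2   s.t.  ||x||_0 <= k
# A standard discrete-time L0 method, shown only for intuition.

def hard_threshold(v, k):
    """Keep the k largest-magnitude entries of v, zero the rest."""
    idx = sorted(range(len(v)), key=lambda j: -abs(v[j]))[:k]
    return [v[j] if j in idx else 0.0 for j in range(len(v))]

def iht(A, b, k, step, iters=200):
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = hard_threshold([x[j] - step * g[j] for j in range(n)], k)
    return x

# Same toy data as the L1 sketch: b = A @ [0, 0, 1].
A = [[1.0, 0.0, 1.0],
     [0.0, 1.0, 1.0]]
b = [1.0, 1.0]
x_hat = iht(A, b, k=1, step=0.3)
# x_hat[2] converges to 1.0 exactly (no L1 shrinkage bias).
```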
Next, several other applications of the L1 and L0 minimization algorithms are explored. First, an ellipse fitting problem is investigated. Given a set of two-dimensional (2-D) scattered points, usually obtained from an edge detection process, the aim of ellipse fitting is to construct the elliptic equation that best fits the collected observations. However, some of the 2-D points may be outliers due to imperfect edge detection. To address this issue, the fitting task is formulated as a constrained optimization problem whose objective function is either an L1-norm or an L0-norm term, so the LPNN-LCA framework can be used to solve it.
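The robust L1/L0 formulation itself requires the LPNN-LCA machinery, but the underlying algebraic model is simple to show: every point (x, y) on the ellipse satisfies a conic equation ax² + bxy + cy² + dx + ey = 1 (normalizing the constant term to 1, which assumes the ellipse does not pass through the origin). The sketch below recovers the conic coefficients from five noise-free points on the illustrative ellipse x²/4 + y² = 1 by plain Gaussian elimination; unlike the thesis's formulation, it has no outlier robustness, and all numbers and helper names are assumptions for illustration.

```python
import math

def solve(M, rhs):
    """Solve M z = rhs by Gaussian elimination with partial pivoting."""
    n = len(M)
    A = [row[:] + [rhs[i]] for i, row in enumerate(M)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    z = [0.0] * n
    for r in range(n - 1, -1, -1):
        z[r] = (A[r][n] - sum(A[r][c] * z[c] for c in range(r + 1, n))) / A[r][r]
    return z

# Five noise-free points on the ellipse x^2/4 + y^2 = 1.
pts = [(2.0, 0.0), (-2.0, 0.0), (0.0, 1.0), (0.0, -1.0),
       (math.sqrt(2.0), math.sqrt(0.5))]

# One linear equation a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1 per point.
rows = [[x * x, x * y, y * y, x, y] for (x, y) in pts]
a, b, c, d, e = solve(rows, [1.0] * 5)
# Recovers a ≈ 0.25, c ≈ 1, and b, d, e ≈ 0, up to rounding.
```

With noisy data one would use more than five points and a least-squares (or, as in the thesis, a robust L1/L0) criterion instead of an exact solve.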
Target localization under the time-of-arrival (TOA) model is another practical problem. Because of non-line-of-sight (NLOS) propagation and other perturbations, TOA measurements may contain outliers. Hence, to reduce the influence of these outliers, this problem is also formulated as a constrained optimization problem whose objective function is an L1-norm term. It is solved with the approximate L1-norm LPNN method and with LPNN-LCA-L1.
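To see why an L1 cost helps against NLOS outliers, consider a toy 2-D setup: four illustrative anchors, range measurements from a known target, and one measurement inflated by an NLOS bias. Comparing the minimizers of the L1 and L2 residual costs over a coarse grid shows the L1 estimate staying near the truth while the L2 estimate is dragged off. This brute-force search only demonstrates the cost functions; the thesis solves the L1 problem with the LPNN-based networks, and every number below is an assumption for illustration.

```python
import math

anchors = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0), (4.0, 4.0)]
target = (1.0, 1.0)

dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])

# Noise-free ranges, then an NLOS bias of +2 on the first measurement.
meas = [dist(target, a) for a in anchors]
meas[0] += 2.0

def grid_argmin(cost):
    """Minimize cost(p) over a 0.1-spaced grid on [0, 4] x [0, 4]."""
    best, best_p = float("inf"), None
    for i in range(41):
        for j in range(41):
            p = (i / 10.0, j / 10.0)
            c = cost(p)
            if c < best:
                best, best_p = c, p
    return best_p

l1 = lambda p: sum(abs(dist(p, a) - m) for a, m in zip(anchors, meas))
l2 = lambda p: sum((dist(p, a) - m) ** 2 for a, m in zip(anchors, meas))

p_l1 = grid_argmin(l1)       # stays at the true target (1, 1)
p_l2 = grid_argmin(l2)       # pulled toward the biased measurement
err_l1 = dist(p_l1, target)
err_l2 = dist(p_l2, target)
```

The L1 cost pays for the single biased residual linearly, so three consistent measurements outvote it; the squared cost lets the large residual dominate, biasing the estimate.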
Finally, the training and center selection problem of radial basis function (RBF) neural networks under concurrent faults is studied. The concurrent faults comprise multiplicative weight noise and open weight faults. This problem can be formulated as an optimization problem with an L0-norm term. We propose a novel approach in which the minimax concave penalty (MCP) replaces the L0-norm term and the alternating direction method of multipliers (ADMM) solves the resulting problem.
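A key ingredient of an MCP/ADMM scheme is that the subproblem for the penalized weights reduces to the proximal operator of the MCP, often called firm thresholding. Below is a minimal scalar sketch of the penalty and its prox, assuming unit step size and γ > 1; the full RBF training loop with fault terms is not reproduced, and the function names are illustrative.

```python
def mcp_penalty(t, lam, gamma):
    """Minimax concave penalty: |t| tapered quadratically until it flattens."""
    a = abs(t)
    if a <= gamma * lam:
        return lam * a - t * t / (2.0 * gamma)
    return 0.5 * gamma * lam * lam          # constant beyond gamma*lam

def mcp_prox(v, lam, gamma):
    """argmin_x 0.5*(x - v)^2 + mcp_penalty(x, lam, gamma), for gamma > 1."""
    a = abs(v)
    s = 1.0 if v >= 0 else -1.0
    if a <= lam:
        return 0.0                           # small inputs zeroed: sparsity
    if a <= gamma * lam:
        return s * gamma * (a - lam) / (gamma - 1.0)
    return v                                 # large inputs untouched: no bias

small = mcp_prox(0.5, lam=1.0, gamma=2.0)    # -> 0.0
mid = mcp_prox(1.5, lam=1.0, gamma=2.0)      # -> 1.0 (partially shrunk)
large = mcp_prox(5.0, lam=1.0, gamma=2.0)    # -> 5.0 (unlike soft thresholding)
```

The "no bias for large inputs" behavior is exactly why MCP approximates the L0-norm better than the L1-norm, whose soft-thresholding prox shrinks every surviving coefficient by λ.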