Abstract
Latent factor analysis (LFA) aims to uncover hidden structures within observed signals, based on the principle that high-dimensional data can be effectively represented by a small number of latent factors. Many widely used high-dimensional signals exhibit simple underlying structures, such as sparsity (under certain bases) or low-rank properties, which serve as valuable constraints and regularizers for signal restoration. In this thesis, we address the problem of restoring incomplete and distorted signals using LFA, incorporating sparsity and low-rank regularizations on the signals or outliers. Since low rank implies a small number of nonzero singular values, low-rankness and sparsity are inherently related; consequently, the optimization problems we formulate primarily seek sparse solutions. We also explore multiple strategies for handling sparsity-related regularization across different scenarios.

Sparsity is typically measured by the ℓ0-norm, which counts the number of nonzero entries, and the rank of a matrix equals the ℓ0-norm of its singular value vector (SVV). Directly minimizing the ℓ0-norm or the rank is NP-hard, so we first handle rank minimization by optimizing surrogates. One well-known rank surrogate is the nuclear norm, which is the ℓ1-norm of the matrix SVV. However, the approximation gap between the nuclear norm and the rank function is significant for large singular values. To mitigate this issue, we propose the truncated quadratic norm, which is based on the truncated quadratic function. This function maps large singular values to one, faithfully following the rank function, while for small singular values it applies a square operation that is easy to optimize. This norm is employed in matrix completion, the task of recovering the missing entries of a partially observed matrix.
The low-rank assumption, which seeks a complete matrix with just a few nonzero singular values, is usually adopted. We impose the truncated quadratic norm on two factor matrices instead of the original matrix, which reduces the computational complexity of singular value decomposition. The optimization is carried out using the proximal linear method, and we analyze its convergence behavior. Experimental results on synthetic data and grayscale images demonstrate the superior performance of our approach over competing methods.
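As an illustration (not taken from the thesis), the following minimal sketch compares the rank (ℓ0-norm of the SVV), the nuclear norm (ℓ1-norm of the SVV), and a truncated quadratic penalty on a toy singular value vector. The cut-off parameter `gamma` and the exact parameterization `min((σ/γ)², 1)` are assumptions chosen for illustration; the thesis's definition may differ in its constants.

```python
import numpy as np

def truncated_quadratic(s, gamma):
    """Truncated quadratic penalty: quadratic below gamma, constant 1 above.

    gamma is a hypothetical cut-off; large singular values are mapped to 1
    (mimicking the rank function), small ones are squared.
    """
    return np.minimum((s / gamma) ** 2, 1.0)

# Toy singular value vector: two large values, two small (noise-level) ones.
svv = np.array([10.0, 8.0, 0.2, 0.1])
gamma = 1.0

rank = np.count_nonzero(svv)                 # l0-norm of the SVV -> 4
nuclear = svv.sum()                          # l1-norm of the SVV -> 18.3
tq = truncated_quadratic(svv, gamma).sum()   # 1 + 1 + 0.04 + 0.01 = 2.05
```

Note how the nuclear norm is dominated by the large singular values (18.3), far from the "effective rank" of 2, while the truncated quadratic penalty (2.05) stays close to it.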
Next, another rank surrogate, the logarithmic norm, is employed to tackle structured interference in signals. Moreover, although directly minimizing the ℓ0-norm is difficult, solving the ℓ0-norm regularized least squares (LS) problem is feasible: its solution is produced by the hard-thresholding operator. With these strategies, a robust sparse representation (SR) algorithm is developed. As outliers can be correlated or independent, we decompose the SR fitting error into a low-rank component and a sparse part, corresponding to correlated and independent anomalies, respectively. This fitting error decomposition scheme is applied to face image classification, where group sparsity of the representation coefficients is further adopted. The logarithmic norm, ℓ0-norm, and ℓ2,0-norm are imposed on the low-rank interference, sparse outliers, and representation coefficients, respectively. We use the alternating direction method of multipliers to optimize the objective function. The ℓ0-norm minimization subproblems, in the form of regularized LS, are solved by adaptive hard-thresholding, where the threshold is calculated from the median absolute deviation (MAD). The proposed algorithm achieves high face recognition rates under varying illuminations, expressions, impulsive noise models, and masks.
Since the ℓ0-norm combined with an LS fidelity term is tractable, we can also handle rank minimization by solving a rank-function regularized LS problem. We verify the effectiveness of this idea on a robust direction-of-arrival (DOA) estimation algorithm in the presence of distorted sensors. In a perfect uniform linear array, the noise-free array observation is a low-rank matrix. However, sensor distortion introduces gain-phase uncertainty into the received signal model, and the low-rankness of the array observation no longer holds. It is reasonable to assume that the distorted sensors are sparsely distributed in the array, because the sensor defect rate is usually small. Accounting for the gain-phase errors, the observation is contaminated by row-sparse outliers. Under the framework of low-rank and row-sparse matrix decomposition, robust DOA estimation with distorted sensors is achieved. The rank function and ℓ2,0-norm are employed for low-rank and row-sparsity regularization, respectively, to avoid the approximation gap. The objective function is minimized by proximal block coordinate descent. The rank function and ℓ2,0-norm minimizations reduce to ℓ0-norm minimization, which is solved via hard-thresholding; the threshold is adaptively determined during the iterations based on the shifted MAD. This approach also enables source enumeration and distorted sensor detection. Additionally, we find that the low-rank observation and the row-sparse errors are related through the sparse sensor gain-phase error vector. To leverage this relationship, we optimize with respect to the low-rank matrix and the sparse sensor error vector, regularized by the rank function and ℓ0-norm, respectively. The block proximal linear method is exploited to minimize the objective, and hard-thresholding again provides the solution to the resultant ℓ0-norm minimization problem. Here, we adopt another adaptive threshold determination method based on the scaled third quartile.
Both algorithms outperform the benchmark schemes in DOA estimation, source enumeration, and distorted sensor detection, with the second approach yielding better performance.
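The reduction of the rank and ℓ2,0-norm penalties to ℓ0-style hard-thresholding can be sketched on toy data. This is an illustrative example under assumed thresholds (`tau` values chosen by hand here, not the adaptive shifted-MAD or third-quartile rules of the thesis): the ℓ2,0 proximal step zeroes out rows with small ℓ2-norm, and the rank-penalty proximal step hard-thresholds singular values.

```python
import numpy as np

def row_hard_threshold(Y, tau):
    """l2,0 proximal step: zero out rows whose l2-norm is at most tau."""
    X = Y.copy()
    X[np.linalg.norm(X, axis=1) <= tau] = 0.0
    return X

def svd_hard_threshold(Y, tau):
    """Rank-penalty proximal step: hard-threshold the singular values."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s[s <= tau] = 0.0
    return U @ np.diag(s) @ Vt

# Toy observation: rank-1 signal plus two heavily distorted sensor rows.
rng = np.random.default_rng(1)
L = np.outer(rng.normal(size=8), rng.normal(size=6))   # low-rank part
S = np.zeros((8, 6))
S[[2, 5]] = 20.0                                       # row-sparse outliers

# Row thresholding isolates the distorted sensors (rows 2 and 5).
S_hat = row_hard_threshold(L + S, tau=15.0)

# Singular value thresholding recovers a rank-1 matrix from small dense noise.
L_hat = svd_hard_threshold(L + 0.01 * np.ones((8, 6)), tau=0.5)
```

In the actual algorithms the two steps alternate within proximal block coordinate descent, with the thresholds updated adaptively at each iteration.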
| Date of Award | 20 Aug 2025 |
|---|---|
| Original language | English |
| Awarding Institution | |
| Supervisor | Hing Cheung SO (Supervisor) |