Advanced Low-Rank Matrix/Tensor Recovery: Pursuit of Optimality, Unbiasedness, Robustness and Sparsity
Student thesis: Doctoral Thesis
Detail(s)
Award date | 18 Aug 2023 |
Link(s)
Permanent Link | https://scholars.cityu.edu.hk/en/theses/theses(0783c97f-f89f-42b7-ba58-33c11c08ebfd).html |
Abstract
Low-rank matrix/tensor recovery refers to restoring a matrix/tensor from degraded observations, which may involve missing entries and/or noise disturbances, by exploiting the low-rank property. It has been widely utilized in numerous applications, including video reconstruction, image inpainting, hyperspectral/multispectral image recovery, and radar data analysis, because many real-world signals possess an approximately low-rank structure even though they lie in a high-dimensional space. The aim of this thesis is to devise advanced algorithms for low-rank matrix/tensor restoration in pursuit of optimality, unbiasedness, robustness and sparsity. In addition, theoretical analyses of the developed methods, including computational complexity and convergence, are provided. Extensive numerical examples using synthetic data, real-world photos, videos, human faces, and/or multispectral images show the superiority of the devised algorithms over the competitors in terms of recovery accuracy and computational time.
We solve low-rank matrix recovery (LRMR) via three schemes: low-rank factorization, successive rank-one matrix approximation, and rank minimization. We first tackle factorization-based matrix recovery in the presence of outliers. The truncated-quadratic function is adopted to combat large noise interference because it only down-weights outlier-contaminated entries and caps the contribution of gross errors at a small constant. In contrast to the Huber function, the truncated-quadratic function can handle outliers with large magnitudes. As this loss function is nonsmooth, half-quadratic optimization is exploited to convert it into two equivalent formulations. Based on the two forms, we propose two efficient and outlier-robust techniques with convergence guarantees. Each subproblem is convex and its optimal solution is derived. Nevertheless, the weight function of the truncated-quadratic loss is discontinuous and hence sensitive even to small changes in the residual. To fix this issue, we design a novel continuously differentiable loss function called hybrid ordinary-Welsch (HOW), which only penalizes outlier-corrupted observations and has a continuous weight function. Since HOW is nonconvex, the Legendre-Fenchel transform is utilized to derive a sparsity-inducing regularizer (SIR). In addition, although the SIR is nonconvex, the corresponding subproblem is convex, and thus the optimal solution is obtained. It is proved that, when addressing the penalized least squares problem, the SIR yields a sparse and approximately unbiased solution. Furthermore, we use HOW as a penalty function for robust LRMR, and scaled alternating steepest descent is employed to find the low-rank component.
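To make the motivation for HOW concrete, the following minimal sketch contrasts the standard Huber and truncated-quadratic losses together with their half-quadratic (multiplicative) weight functions. The scaling constants are the textbook ones and may differ from those used in the thesis; the HOW loss itself is not reproduced here.

```python
# Illustrative comparison: the Huber weight decays continuously, while the
# truncated-quadratic weight jumps from 1 to 0 at the threshold, which is the
# discontinuity the HOW loss is designed to remove.
import numpy as np

def huber_loss(e, c):
    """Huber loss: quadratic near zero, linear in the tails."""
    a = np.abs(e)
    return np.where(a <= c, 0.5 * e**2, c * a - 0.5 * c**2)

def huber_weight(e, c):
    """Huber weight w(e) = min(1, c/|e|): continuous, down-weights large residuals."""
    a = np.abs(e)
    return np.where(a <= c, 1.0, c / np.maximum(a, 1e-12))

def truncated_quadratic_loss(e, c):
    """Truncated quadratic: quadratic near zero, capped at a constant for gross errors."""
    a = np.abs(e)
    return np.where(a <= c, 0.5 * e**2, 0.5 * c**2)

def truncated_quadratic_weight(e, c):
    """Binary weight: 1 inside the threshold, 0 outside (discontinuous at |e| = c)."""
    return np.where(np.abs(e) <= c, 1.0, 0.0)

if __name__ == "__main__":
    e = np.linspace(-3.0, 3.0, 7)
    c = 1.0
    print(huber_weight(e, c))                # decays smoothly towards zero
    print(truncated_quadratic_weight(e, c))  # jumps from 1 to 0 at |e| = c
```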
Since factorization-based approaches require knowing the matrix rank, which may be unavailable in real-life scenarios, the successive rank-one approximation approach is then investigated. Based on the conventional rank-one matrix completion model, we adopt the correntropy function to resist noise disturbances and propose a robust rank-one matrix completion algorithm. Nevertheless, it is revealed that the basis matrices in the traditional schemes are not further updated once they have been determined. We then develop a new rank-one matrix recovery method from the vector perspective and prove that the devised algorithm attains a smaller recovery error than the conventional rank-one recovery techniques.
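For context, the sketch below shows the conventional greedy successive rank-one completion scheme this chapter builds on: each stage fits a rank-one term to the residual on the observed entries by alternating least squares, and earlier terms are never revisited, which is exactly the limitation noted above. The correntropy-weighted variant and the vector-perspective update proposed in the thesis are not reproduced here.

```python
# Minimal greedy successive rank-one matrix completion (illustrative baseline only).
import numpy as np

def rank_one_stage(R, mask, iters=50):
    """Fit u v^T to the observed residual R; mask == True marks observed entries."""
    m, n = R.shape
    u = np.random.randn(m)
    v = np.random.randn(n)
    for _ in range(iters):
        # Update each v_j by least squares over the observed entries of column j.
        for j in range(n):
            idx = mask[:, j]
            denom = np.dot(u[idx], u[idx])
            v[j] = np.dot(u[idx], R[idx, j]) / denom if denom > 0 else 0.0
        # Update each u_i symmetrically over the observed entries of row i.
        for i in range(m):
            idx = mask[i, :]
            denom = np.dot(v[idx], v[idx])
            u[i] = np.dot(v[idx], R[i, idx]) / denom if denom > 0 else 0.0
    return u, v

def successive_rank_one_completion(M, mask, rank):
    """Greedily accumulate rank-one terms; earlier terms are not updated again."""
    X = np.zeros(M.shape)
    for _ in range(rank):
        u, v = rank_one_stage(M - X, mask)
        X += np.outer(u, v)
    return X
```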
On the other hand, we present a new nonconvex SIR associated with HOW. As the SIR is able to yield a sparse solution, we apply it to robust LRMR by enhancing the sparsity of the singular values and penalizing sparse outliers. An efficient algorithm using the alternating direction method of multipliers (ADMM) is suggested. Although the SIR is not convex, we show that each subproblem involving the SIR is convex, and derive its optimal solution in closed form. Moreover, the sequences generated by our algorithm are bounded, and any accumulation point is a stationary point.
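As a rough illustration of this low-rank-plus-sparse ADMM structure, the sketch below solves the standard convex surrogate model min_{L,S} ||L||_* + lam*||S||_1 subject to M = L + S. The nuclear-norm and l1 proximal steps (singular value thresholding and soft-thresholding) stand in for the closed-form SIR thresholding operators derived in the thesis and are not the thesis's algorithm.

```python
# ADMM sketch for robust low-rank + sparse decomposition (convex stand-in for the
# nonconvex SIR-regularized model described in the abstract).
import numpy as np

def soft_threshold(X, tau):
    """Elementwise shrinkage: proximal operator of tau * ||.||_1."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding: proximal operator of tau * ||.||_*."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def robust_lrmr_admm(M, lam=None, rho=1.0, iters=200):
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    L = np.zeros(M.shape); S = np.zeros(M.shape); Y = np.zeros(M.shape)
    for _ in range(iters):
        L = svt(M - S + Y / rho, 1.0 / rho)             # low-rank update
        S = soft_threshold(M - L + Y / rho, lam / rho)  # sparse-outlier update
        Y = Y + rho * (M - L - S)                       # dual ascent on M = L + S
    return L, S
```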
Finally, we provide a framework to generate SIRs along with their theoretical properties. Applying the ℓp-norm, Cauchy and Welsch functions to our framework produces the corresponding SIRs. Compared with the ℓp-norm with 0 < p < 1 and the Laplace function, the generated SIRs have closed-form thresholding operators. We then utilize these SIRs as nonconvex tensor rank surrogates and define three tensor norms. We provide a generalized tensor singular value thresholding operator to solve the tensor norm minimization problem. Furthermore, the tensor norms and the corresponding SIRs are used to capture the low-rank structure of a tensor and to penalize outliers, respectively, for robust low-rank tensor recovery (LRTR). Based on the ADMM, efficient algorithms with convergence guarantees are developed. It is worth pointing out that our robust LRTR methods remain applicable to LRTR even in the absence of gross errors.
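To indicate where a generalized tensor singular value thresholding operator acts, the sketch below shows the standard t-SVD-based thresholding step: transform along the third mode with the FFT, threshold the singular values of each frontal slice, and transform back. Plain soft-thresholding is used as a stand-in for the SIR-based thresholding operators of the thesis.

```python
# t-SVD-style tensor singular value thresholding (soft-thresholding used as a
# placeholder for a generalized, SIR-based thresholding rule).
import numpy as np

def tensor_svt(T, tau):
    """Approximate proximal step for the tubal nuclear norm of a 3-way tensor T."""
    n1, n2, n3 = T.shape
    F = np.fft.fft(T, axis=2)                    # Fourier transform along mode 3
    out = np.zeros_like(F, dtype=complex)
    for k in range(n3):
        U, s, Vh = np.linalg.svd(F[:, :, k], full_matrices=False)
        s = np.maximum(s - tau, 0.0)             # swap in a SIR thresholding here
        out[:, :, k] = (U * s) @ Vh
    return np.real(np.fft.ifft(out, axis=2))

if __name__ == "__main__":
    T = np.random.randn(10, 12, 5)
    print(tensor_svt(T, tau=0.5).shape)
```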