Learning Proximal Operator Methods for Nonconvex Sparse Recovery with Theoretical Guarantee

Research output: Journal Publications and Reviews › Publication in refereed journal › peer-review

6 Scopus Citations




Original language: English
Article number: 9025003
Pages (from-to): 5244-5259
Journal / Publication: IEEE Transactions on Signal Processing
Online published: 5 Mar 2020
Publication status: Published - 2020


Sparse recovery has attracted considerable attention in the signal processing community in recent years because of its widespread use in many applications. Although many convex and nonconvex methods have been proposed for this problem, most of them are computationally expensive. Recently, the learned iterative shrinkage-thresholding algorithm (LISTA) and its variants have been applied to sparse recovery; these are based on specific deep neural networks designed by unfolding ISTA. However, the problem they can solve is regularized by the l1 norm, which is less effective at promoting sparseness than some nonconvex sparsity-inducing penalties (SIPs). Motivated by the fact that problems regularized by these SIPs can be solved by proximal operator methods, we devise a network named the learning proximal operator method (LePOM) for sparse recovery. For LePOM on a general class of SIPs, a necessary condition for its convergence is established. Based on this condition, a simplified network with fewer parameters is proposed. Theoretical analysis of this network inspires us to further reduce the number of parameters, leading to the Analytical LePOM (ALePOM). ALePOM determines most parameters by solving an optimization problem and significantly reduces the number of parameters. Theoretical analysis shows that if the signal is sufficiently sparse, ALePOM converges linearly. Simulations confirm our analyses and demonstrate that the proposed solutions outperform state-of-the-art sparse recovery algorithms and neural network-based methods.
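To make the unfolding idea concrete, the classical ISTA iteration that LISTA-type networks unroll can be sketched as follows. This is a minimal illustration of l1-regularized proximal gradient descent, not the paper's LePOM/ALePOM networks; the dictionary `A`, sparsity level, and regularization weight `lam` are illustrative assumptions.

```python
# Minimal ISTA sketch for min_x 0.5*||A x - b||^2 + lam*||x||_1.
# Each iteration is a gradient step on the smooth term followed by the
# proximal operator of the l1 penalty (soft-thresholding); LISTA-style
# networks unfold a fixed number of such iterations into layers.
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau*||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(A, b, lam, n_iter=500):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)           # gradient of 0.5*||A x - b||^2
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Illustrative synthetic problem: recover a 3-sparse signal.
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 128)) / np.sqrt(64)
x_true = np.zeros(128)
x_true[[3, 40, 90]] = [1.0, -0.7, 0.5]
b = A @ x_true
x_hat = ista(A, b, lam=0.01)
```

In LISTA and its variants, the fixed matrices and thresholds of this loop are replaced by learned per-layer parameters; the paper's LePOM generalizes the soft-thresholding step to proximal operators of nonconvex SIPs.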

Research Area(s)

  • neural network, nonconvex sparsity-inducing penalty, proximal operator method, Sparse recovery, unfolding iterative algorithm