Efficient Low-Rank Matrix Factorization based on ℓ1,ε-norm for Online Background Subtraction

Research output: Journal Publications and Reviews (RGC: 21, 22, 62) › Publication in refereed journal › peer-review

Detail(s)

Original language: English
Number of pages: 5
Journal / Publication: IEEE Transactions on Circuits and Systems for Video Technology
Publication status: Online published - 19 Nov 2021

Abstract

Background subtraction refers to extracting the foreground from an observed video, and is a fundamental problem in various applications. Two kinds of methods are popular for background separation, namely, robust principal component analysis (RPCA) and low-rank matrix factorization (LRMF). However, RPCA has the drawback of requiring a penalty parameter to be tuned to attain an ideal result. Compared with RPCA, ℓ1-norm based LRMF does not involve extra parameter tuning, but the resulting minimization is challenging to optimize because the ℓ1-norm is nonsmooth, and finding the optimal solution is time-consuming. In this work, we propose to employ the smooth ℓ1,ε-norm, an approximation of the ℓ1-norm, to tackle background subtraction. The proposed model thus inherits the robustness of LRMF while becoming more tractable. The resultant optimization problem is solved by alternating minimization and gradient descent, where the step-size of the gradient descent is adaptively updated via a backtracking line search. The proposed method is proved to be locally convergent. Experimental results on synthetic and real-world data demonstrate that our method outperforms state-of-the-art algorithms in terms of reconstruction loss, computational speed and hardware performance.
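The optimization scheme described above can be illustrated with a minimal batch sketch in NumPy. This is not the authors' implementation (which operates online, frame by frame); it only shows the core ingredients the abstract names: the smooth ℓ1,ε surrogate sum_ij sqrt(x_ij² + ε²) in place of the ℓ1-norm, alternating gradient updates of the two factors, and an Armijo backtracking line search for the step-size. The function names, the rank `r`, and all hyperparameter values are illustrative assumptions.

```python
import numpy as np

def l1eps(X, eps):
    # Smooth l1,eps surrogate: sum of sqrt(x^2 + eps^2) over all entries.
    return np.sum(np.sqrt(X ** 2 + eps ** 2))

def weight(R, eps):
    # Elementwise derivative of sqrt(r^2 + eps^2) w.r.t. r.
    return R / np.sqrt(R ** 2 + eps ** 2)

def backtracking_step(f, grad, x, d, t0=1.0, beta=0.5, c=1e-4):
    # Armijo backtracking line search along descent direction d = -grad.
    t, fx, gd = t0, f(x), np.sum(grad * d)
    while f(x + t * d) > fx + c * t * gd:
        t *= beta
        if t < 1e-12:
            return 0.0  # no acceptable step; keep iterate unchanged
    return t

def lrmf_l1eps(M, r, eps=1e-3, iters=100):
    # Alternating minimization of l1eps(M - U V) over U and V
    # (batch sketch; seed fixed for reproducibility of the illustration).
    m, n = M.shape
    rng = np.random.default_rng(0)
    U = rng.standard_normal((m, r))
    V = rng.standard_normal((r, n))
    for _ in range(iters):
        # Update U with V fixed.
        G_U = -weight(M - U @ V, eps) @ V.T
        t = backtracking_step(lambda U_: l1eps(M - U_ @ V, eps), G_U, U, -G_U)
        U = U - t * G_U
        # Update V with U fixed.
        G_V = -U.T @ weight(M - U @ V, eps)
        t = backtracking_step(lambda V_: l1eps(M - U @ V_, eps), G_V, V, -G_V)
        V = V - t * G_V
    return U, V
```

In a background-subtraction setting, the columns of `M` would be vectorized video frames: the low-rank product `U @ V` models the background, and the residual `M - U @ V` contains the foreground. The Armijo condition guarantees the objective never increases, which is what the adaptive step-size update via backtracking provides.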

Research Area(s)

  • Background subtraction, online subspace learning, low-rank matrix factorization