Efficient Low-Rank Matrix Factorization based on ℓ1,ε-norm for Online Background Subtraction

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

17 Scopus Citations

Detail(s)

Original language: English
Pages (from-to): 4900-4904
Number of pages: 5
Journal / Publication: IEEE Transactions on Circuits and Systems for Video Technology
Volume: 32
Issue number: 7
Online published: 19 Nov 2021
Publication status: Published - Jul 2022

Abstract

Background subtraction refers to extracting the foreground from an observed video and is a fundamental problem in various applications. Two popular families of methods address background separation: robust principal component analysis (RPCA) and low-rank matrix factorization (LRMF). A drawback of RPCA is that it requires tuning a penalty parameter to attain an ideal result. Compared with RPCA, ℓ1-norm based LRMF involves no extra parameter tuning, but the resulting minimization is challenging to optimize because the ℓ1-norm is nonsmooth, and finding the optimal solution is time-consuming. In this work, we propose to employ the smooth ℓ1,ε-norm, an approximation of the ℓ1-norm, to tackle background subtraction. The proposed model thus inherits the superiority of LRMF while becoming more tractable. The resulting optimization problem is solved by alternating minimization and gradient descent, where the step size of the gradient descent is adaptively updated via a backtracking line search. The proposed method is proved to be locally convergent. Experimental results on synthetic and real-world data demonstrate that our method outperforms state-of-the-art algorithms in terms of reconstruction loss, computational speed and hardware performance.
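The abstract's exact formulation is not reproduced in this record, so the following is only a minimal sketch of the general idea, under the common assumption that the smooth ℓ1,ε-norm of a residual r is Σ √(r² + ε): factor the frame matrix M ≈ UV, and alternately update U and V by gradient descent, with the step size chosen by an Armijo-style backtracking line search. All function names and parameter values here are hypothetical, not the paper's.

```python
import numpy as np

def l1_eps(x, eps=1e-3):
    # Smooth surrogate for |x|: sqrt(x^2 + eps); differentiable everywhere.
    return np.sqrt(x**2 + eps)

def loss(M, U, V, eps=1e-3):
    # Smoothed-l1 reconstruction loss of the low-rank fit U @ V.
    return l1_eps(M - U @ V, eps).sum()

def grad_U(M, U, V, eps):
    R = U @ V - M
    W = R / np.sqrt(R**2 + eps)   # derivative of sqrt(r^2 + eps) w.r.t. r
    return W @ V.T

def grad_V(M, U, V, eps):
    R = U @ V - M
    W = R / np.sqrt(R**2 + eps)
    return U.T @ W

def backtracking_step(f, x, g, t0=1.0, beta=0.5, c=1e-4, max_halvings=30):
    # Shrink the step size until the Armijo sufficient-decrease condition holds;
    # if it never holds, leave x unchanged so the loss never increases.
    fx, gnorm2, t = f(x), (g**2).sum(), t0
    for _ in range(max_halvings):
        if f(x - t * g) <= fx - c * t * gnorm2:
            return x - t * g
        t *= beta
    return x

def lrmf_background(M, rank=2, eps=1e-3, iters=100, seed=0):
    # Alternating minimization: fix V, descend in U; fix U, descend in V.
    rng = np.random.default_rng(seed)
    m, n = M.shape
    U = 0.1 * rng.standard_normal((m, rank))
    V = 0.1 * rng.standard_normal((rank, n))
    hist = [loss(M, U, V, eps)]
    for _ in range(iters):
        U = backtracking_step(lambda X: loss(M, X, V, eps), U, grad_U(M, U, V, eps))
        V = backtracking_step(lambda X: loss(M, U, X, eps), V, grad_V(M, U, V, eps))
        hist.append(loss(M, U, V, eps))
    background = U @ V
    foreground = M - background        # residual = moving objects / outliers
    return background, foreground, hist
```

Because each update is accepted only when it satisfies the Armijo condition, the loss sequence is monotonically non-increasing, which mirrors why a smooth surrogate makes the nonsmooth ℓ1 objective tractable for plain gradient descent.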

Research Area(s)

  • Background subtraction, online subspace learning, low-rank matrix factorization