Detection of foreground in dynamic scene via two-step background subtraction

Research output: Journal Publications and Reviews · RGC 21 - Publication in refereed journal · peer-review

6 Scopus Citations


Detail(s)

Original language: English
Pages (from-to): 723-740
Journal / Publication: Machine Vision and Applications
Volume: 26
Issue number: 6
Online published: 3 Jul 2015
Publication status: Published - Aug 2015

Abstract

Various computer vision applications, such as video surveillance and gait analysis, must perform human detection, which is usually done via background modeling and subtraction. The problem is challenging when the image sequence captures human activities in a dynamic scene. This paper presents a method for foreground detection via two-step background subtraction. A background frame is first generated from the initial image frames of the sequence and is continuously updated based on the background subtraction results. The background is modeled as non-overlapping blocks of background-frame pixel colors. In the first step, the current image frame is compared with the background model via a similarity measure, which separates potential foregrounds from the static background and from most of the dynamic background pixels. In the second step, if a potential foreground is sufficiently large, its enclosing region is compared with the background model again to obtain a refined foreground shape. We compare our method with various existing background subtraction methods on image sequences containing dynamic background elements such as trees and water, and show through quantitative measures that our method is superior.
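The two-step pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual method: it works on grayscale frames rather than pixel colors, uses a median of the initial frames as the background estimate, uses mean absolute block difference as a stand-in for the paper's similarity measure, and treats all flagged blocks as one potential foreground region. The block size and thresholds are illustrative assumptions.

```python
import numpy as np

def build_background(frames):
    # Median of the initial frames gives a crude static-background estimate.
    return np.median(np.stack(frames).astype(np.float64), axis=0)

def coarse_foreground(frame, background, block=4, sim_thresh=10.0):
    # Step 1: compare non-overlapping blocks of the current frame against
    # the background model; blocks that differ too much become potential
    # foreground. (Mean absolute difference stands in for the paper's
    # similarity measure.)
    h, w = frame.shape
    mask = np.zeros((h, w), dtype=bool)
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            diff = np.abs(frame[y:y+block, x:x+block].astype(np.float64)
                          - background[y:y+block, x:x+block])
            if diff.mean() > sim_thresh:
                mask[y:y+block, x:x+block] = True
    return mask

def refine_foreground(frame, background, mask, min_area=16, pix_thresh=15.0):
    # Step 2: if the potential foreground is sufficiently large, re-examine
    # its enclosing region pixel-wise to recover a sharper shape.
    if mask.sum() < min_area:
        return np.zeros_like(mask)
    ys, xs = np.nonzero(mask)
    y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
    region = np.abs(frame[y0:y1, x0:x1].astype(np.float64)
                    - background[y0:y1, x0:x1])
    refined = np.zeros_like(mask)
    refined[y0:y1, x0:x1] = region > pix_thresh
    return refined
```

On a synthetic sequence (an all-zero background and a frame containing a bright square), step 1 flags the blocks overlapping the square and step 2 recovers the square's exact shape inside the bounding box.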

Research Area(s)

  • Background subtraction, Dynamic scene, Dynamic texture, Foreground detection, Video surveillance