TY - JOUR
T1 - Detection of foreground in dynamic scene via two-step background subtraction
AU - Chan, K. L.
PY - 2015/8
Y1 - 2015/8
AB - Various computer vision applications, such as video surveillance and gait analysis, require human detection. This is usually achieved via background modeling and subtraction, which becomes challenging when the image sequence captures human activities in a dynamic scene. This paper presents a method for foreground detection via a two-step background subtraction. A background frame is first generated from the initial frames of the image sequence and continuously updated based on the background subtraction results. The background is modeled as non-overlapping blocks of background-frame pixel colors. In the first step of background subtraction, the current image frame is compared with the background model via a similarity measure, separating potential foregrounds from the static background and most of the dynamic background pixels. In the second step, if a potential foreground is sufficiently large, its enclosing region is compared with the background model again to obtain a refined foreground shape. We compare our method with various existing background subtraction methods on image sequences containing dynamic background elements such as trees and water, and demonstrate its superiority through quantitative measures.
KW - Background subtraction
KW - Dynamic scene
KW - Dynamic texture
KW - Foreground detection
KW - Video surveillance
UR - http://www.scopus.com/inward/record.url?scp=84937816126&partnerID=8YFLogxK
UR - https://www.scopus.com/record/pubmetrics.uri?eid=2-s2.0-84937816126&origin=recordpage
U2 - 10.1007/s00138-015-0696-8
DO - 10.1007/s00138-015-0696-8
M3 - RGC 21 - Publication in refereed journal
SN - 0932-8092
VL - 26
SP - 723
EP - 740
JO - Machine Vision and Applications
JF - Machine Vision and Applications
IS - 6
ER -