Spatiotemporal background subtraction using minimum spanning tree and optical flow
Research output: Chapters, Conference Papers, Creative and Literary Works › RGC 32 - Refereed conference paper (with host publication) › peer-review
Author(s)
Related Research Unit(s)
Detail(s)
Original language | English |
---|---|
Title of host publication | Computer Vision, ECCV 2014 |
Subtitle of host publication | 13th European Conference, Proceedings |
Publisher | Springer Verlag |
Pages | 521-534 |
Volume | 8695 LNCS |
Edition | PART 7 |
ISBN (print) | 9783319105833 |
Publication status | Published - 2014 |
Publication series
Name | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) |
---|---|
Volume | 8695 LNCS |
ISSN (Print) | 0302-9743 |
ISSN (electronic) | 1611-3349 |
Conference
Title | 13th European Conference on Computer Vision, ECCV 2014 |
---|---|
Place | Switzerland |
City | Zurich |
Period | 6 - 12 September 2014 |
Link(s)
Abstract
Background modeling and subtraction is a fundamental research topic in computer vision. A pixel-level background model uses a Gaussian mixture model (GMM) or kernel density estimation to represent the distribution of each pixel value. Each pixel is processed independently, which makes the approach very efficient, but it is not robust to noise caused by sudden illumination changes. A region-based background model uses local texture information around a pixel to suppress such noise, but it is vulnerable to periodic changes of pixel values and is relatively slow. A straightforward combination of the two cannot maintain the advantages of both. This paper proposes a real-time integration based on a robust estimator. A recent, efficient minimum-spanning-tree-based aggregation technique enables robust estimators such as the M-smoother to run in real time and effectively suppress the noisy background estimates obtained from Gaussian mixture models. The refined background estimates are then used to update the Gaussian mixture model at each pixel location. Additionally, optical flow estimation is used to track foreground pixels and is integrated with a temporal M-smoother to ensure temporally consistent background subtraction. The algorithm is evaluated on both synthetic and real-world benchmarks, and the experimental results show that it is the top performer. © 2014 Springer International Publishing.
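The pipeline described in the abstract (per-pixel GMM → spatial robust smoothing → optical-flow-based temporal consistency) can be illustrated with a minimal sketch. The code below is not the paper's method: it uses OpenCV's MOG2 as the per-pixel GMM, a median blur as a crude stand-in for the MST-based M-smoother, and a Farneback optical-flow warp with simple mask fusion in place of the temporal M-smoother. The input path `input.avi` and all parameter values are assumptions for illustration only.

```python
# Illustrative sketch only; the MST-based spatial/temporal M-smoothers from the
# paper are replaced by simple OpenCV stand-ins (median blur, mask fusion).
import cv2
import numpy as np

cap = cv2.VideoCapture("input.avi")  # hypothetical input video
gmm = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                         detectShadows=False)
prev_gray, prev_mask = None, None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # 1. Per-pixel GMM gives a fast but noisy foreground estimate.
    raw_mask = gmm.apply(frame)

    # 2. Spatial robust smoothing of the noisy estimate.
    #    (The paper uses an MST-based M-smoother; a median blur is only a
    #    rough, non-edge-aware placeholder for that step.)
    spatial_mask = cv2.medianBlur(raw_mask, 5)

    # 3. Temporal consistency: warp the previous mask along the optical flow
    #    and fuse it with the current estimate.
    if prev_gray is not None:
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        h, w = gray.shape
        grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
        map_x = (grid_x - flow[..., 0]).astype(np.float32)
        map_y = (grid_y - flow[..., 1]).astype(np.float32)
        warped_prev = cv2.remap(prev_mask, map_x, map_y, cv2.INTER_NEAREST)

        # Keep pixels that are foreground in both the current and the warped
        # previous mask, plus the eroded core of newly detected foreground.
        # (A crude substitute for the temporal M-smoother in the paper.)
        agree = cv2.bitwise_and(spatial_mask, warped_prev)
        core = cv2.erode(spatial_mask, np.ones((3, 3), np.uint8))
        final_mask = cv2.bitwise_or(agree, core)
    else:
        final_mask = spatial_mask

    prev_gray, prev_mask = gray, final_mask
    cv2.imshow("foreground", final_mask)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```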
Research Area(s)
- Background Modeling, Optical Flow, Tracking, Video Segmentation
Citation Format(s)
Spatiotemporal background subtraction using minimum spanning tree and optical flow. / Chen, Mingliang; Yang, Qingxiong; Li, Qing et al.
Computer Vision, ECCV 2014: 13th European Conference, Proceedings. Vol. 8695 LNCS, PART 7 ed. Springer Verlag, 2014. p. 521-534 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 8695 LNCS).