Abstract
The detection of salient objects in video sequences is an active research area in computer vision. One approach is to perform joint segmentation of objects and background in each image frame of the video. The background scene is learned and modeled, and each pixel is classified as background if it matches the background model; otherwise the pixel belongs to a salient object. This segregation method faces many difficulties when the video sequence is captured under dynamic circumstances. To tackle these challenges, we propose a novel perception-based local ternary pattern for background modeling. The local pattern is fast to compute and is insensitive to random noise and to intensity scaling. The pattern feature is also invariant to rotation. We also propose a novel scheme for matching a pixel with the background model within a spatio-temporal domain. Furthermore, we devise two feedback mechanisms for maintaining the quality of the result over long videos. First, the background model is updated immediately based on the background subtraction result. Second, the detected object is enhanced by adjusting the segmentation conditions in its proximity via a propagation scheme. We compare our method with state-of-the-art background/foreground segregation algorithms using various video datasets.
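To illustrate the general idea behind ternary-pattern background modeling, the sketch below computes a standard local ternary pattern (in the style of Tan and Triggs) over a 3x3 neighbourhood and matches it against stored background patterns. The fixed tolerance `tau`, the `matches_background` rule, and all function names are illustrative assumptions, not the paper's perception-based variant or its spatio-temporal matching scheme.

```python
import numpy as np

def local_ternary_pattern(patch, center, tau=5):
    """Code each neighbour as +1, 0 or -1 depending on whether it is
    brighter than, close to, or darker than the centre pixel by more
    than a tolerance tau (fixed here purely for illustration)."""
    diff = patch.astype(np.int16) - int(center)
    codes = np.zeros_like(diff)
    codes[diff > tau] = 1
    codes[diff < -tau] = -1
    return codes

def matches_background(pattern, model_patterns, max_mismatches=2):
    """Hypothetical matching rule: the pixel matches the background if
    its code agrees with at least one stored background pattern up to a
    small number of mismatched neighbours."""
    for bg in model_patterns:
        if np.count_nonzero(pattern != bg) <= max_mismatches:
            return True
    return False

# Toy usage: classify the centre pixel of one 3x3 neighbourhood.
patch = np.array([[52, 55, 60],
                  [49, 50, 58],
                  [47, 51, 53]], dtype=np.uint8)
pattern = local_ternary_pattern(patch, patch[1, 1])
background_model = [np.zeros((3, 3), dtype=np.int16)]  # e.g. a flat background
print("background" if matches_background(pattern, background_model) else "salient object")
```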
Original language | English |
---|---|
Title of host publication | Proceedings of the 15th IAPR International Conference on Machine Vision Applications, MVA 2017 |
Publisher | IEEE |
Pages | 510-513 |
ISBN (Print) | 9784901122160 |
DOIs | |
Publication status | Published - May 2017 |
Event | 2017 Fifteenth International Conference on Machine Vision Applications (MVA), Nagoya University, Nagoya, Japan. Duration: 8 May 2017 → 12 May 2017 |
Conference
Conference | 2017 Fifteenth International Conference on Machine Vision Applications (MVA) |
---|---|
Country/Territory | Japan |
City | Nagoya |
Period | 8/05/17 → 12/05/17 |
Internet address | http://www.mva-org.jp/mva2017/ |