TY - JOUR
T1 - Saliency detection in video sequences using perceivable change encoded local pattern
AU - Chan, K. L.
PY - 2018/7
Y1 - 2018/7
N2 - The detection of salient objects in video sequences is an active computer vision research topic. One approach is to perform joint segmentation of objects and background. The background scene is learned and modeled. A pixel is classified as salient if its features do not match the background model. The segmentation process faces many difficulties when the video sequence is captured under various dynamic circumstances. To tackle these challenges, we propose a novel local ternary pattern for background modeling. The features derived from the local pattern are robust to random noise, intensity scale transforms, and rotational transforms. We also propose a novel scheme for matching a pixel with the background model within a spatiotemporal domain. Furthermore, we devise two feedback mechanisms for maintaining the quality of the result over long videos. First, the background model is updated immediately based on the background subtraction result. Second, the detected object is enhanced by adjusting the segmentation conditions of nearby pixels via a propagation scheme. We compare our method with state-of-the-art background subtraction algorithms on various video datasets.
AB - The detection of salient objects in video sequences is an active computer vision research topic. One approach is to perform joint segmentation of objects and background. The background scene is learned and modeled. A pixel is classified as salient if its features do not match the background model. The segmentation process faces many difficulties when the video sequence is captured under various dynamic circumstances. To tackle these challenges, we propose a novel local ternary pattern for background modeling. The features derived from the local pattern are robust to random noise, intensity scale transforms, and rotational transforms. We also propose a novel scheme for matching a pixel with the background model within a spatiotemporal domain. Furthermore, we devise two feedback mechanisms for maintaining the quality of the result over long videos. First, the background model is updated immediately based on the background subtraction result. Second, the detected object is enhanced by adjusting the segmentation conditions of nearby pixels via a propagation scheme. We compare our method with state-of-the-art background subtraction algorithms on various video datasets.
KW - Background modeling
KW - Background subtraction
KW - Local ternary pattern
KW - Saliency detection
UR - http://www.scopus.com/inward/record.url?scp=85040766901&partnerID=8YFLogxK
U2 - 10.1007/s11760-018-1242-8
DO - 10.1007/s11760-018-1242-8
M3 - RGC 21 - Publication in refereed journal
SN - 1863-1703
VL - 12
SP - 975
EP - 982
JO - Signal, Image and Video Processing
JF - Signal, Image and Video Processing
IS - 5
ER -