Saliency detection with moving camera via background model completion
Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review
Author(s)
Related Research Unit(s)
Detail(s)
Original language | English |
---|---|
Article number | 8374 |
Journal / Publication | Sensors |
Volume | 21 |
Issue number | 24 |
Online published | 15 Dec 2021 |
Publication status | Published - Dec 2021 |
Link(s)
DOI | DOI |
---|---|
Attachment(s) | Documents |
Publisher's Copyright Statement | |
Link to Scopus | https://www.scopus.com/record/display.uri?eid=2-s2.0-85121035891&origin=recordpage |
Permanent Link | https://scholars.cityu.edu.hk/en/publications/publication(588bfa52-49e0-4967-b15e-bd386f1d6b16).html |
Abstract
Detecting saliency in videos is a fundamental step in many computer vision systems. Saliency refers to the significant target(s) in the video. The object of interest is further analyzed for high-level applications. Saliency and the background can be segregated if they exhibit different visual cues. Therefore, saliency detection is often formulated as background subtraction. However, saliency detection is challenging. For instance, a dynamic background can result in false positive errors, while camouflage can result in false negative errors. With moving cameras, the captured scenes are even more complicated to handle. We propose a new framework, called saliency detection via background model completion (SD-BMC), that comprises a background modeler and a deep learning background/foreground segmentation network. The background modeler generates an initial clean background image from a short image sequence. Based on the idea of video completion, a good background frame can be synthesized despite the co-existence of a changing background and moving objects. We adopt a background/foreground segmenter that was pre-trained on a specific video dataset; it can also detect saliency in unseen videos. The background modeler can adjust the background image dynamically when the output of the background/foreground segmenter deteriorates while processing a long video. To the best of our knowledge, our framework is the first to adopt video completion for background modeling and saliency detection in videos captured by moving cameras. The F-measure results, obtained from the pan-tilt-zoom (PTZ) videos, show that our proposed framework outperforms some deep learning-based background subtraction models by 11% or more. On more challenging videos, our framework also outperforms many high-ranking background subtraction methods by more than 3%.
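The two-stage loop described in the abstract can be sketched in a few lines. This is a minimal, illustrative stand-in, not the authors' implementation: a per-pixel temporal median substitutes for the video-completion background modeler, and a simple thresholded difference substitutes for the deep background/foreground segmenter. All function names and parameters (`bootstrap`, `refresh_ratio`, `thresh`) are hypothetical.

```python
import numpy as np

def estimate_background(frames):
    """Stand-in for the background modeler: a per-pixel temporal
    median over a short frame sequence. (The paper's modeler uses
    video completion; the median is only an illustrative proxy.)"""
    return np.median(np.stack(frames), axis=0)

def segment_foreground(frame, background, thresh=30.0):
    """Stand-in for the deep background/foreground segmenter:
    pixels that differ enough from the background are foreground."""
    diff = np.abs(frame.astype(float) - background.astype(float))
    return diff > thresh  # boolean saliency mask

def sd_bmc(frames, bootstrap=5, refresh_ratio=0.5):
    """Illustrative SD-BMC-style loop: build an initial background
    from a short sequence, segment each later frame, and rebuild the
    background when the mask covers too much of the frame (a crude
    proxy for the segmenter's output deteriorating)."""
    history = list(frames[:bootstrap])
    background = estimate_background(history)
    masks = []
    for frame in frames[bootstrap:]:
        mask = segment_foreground(frame, background)
        if mask.mean() > refresh_ratio:  # output deteriorated
            background = estimate_background(history[-bootstrap:])
            mask = segment_foreground(frame, background)
        history.append(frame)
        masks.append(mask)
    return masks
```

With a static camera this toy loop flags only pixels that depart from the median background; for a moving (PTZ) camera the background estimate itself must track the changing view, which is exactly the problem the completion-based modeler in the paper addresses.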
Research Area(s)
- Background modeling, Background subtraction, Foreground segmentation, Mobile camera, PTZ camera, Saliency detection
Citation Format(s)
Saliency detection with moving camera via background model completion. / Zhang, Yu-Pei; Chan, Kwok-Leung.
In: Sensors, Vol. 21, No. 24, 8374, 12.2021.