Saliency detection with moving camera via background model completion

Research output: Journal Publications and Reviews › Publication in refereed journal › peer-review

Detail(s)

Original language: English
Article number: 8374
Journal / Publication: Sensors
Volume: 21
Issue number: 24
Online published: 15 Dec 2021
Publication status: Published - Dec 2021

Abstract

Detecting saliency in videos is a fundamental step in many computer vision systems. Saliency refers to the significant target(s) in a video; the object of interest is further analyzed for high-level applications. Saliency can be segregated from the background when the two exhibit different visual cues, so saliency detection is often formulated as background subtraction. However, saliency detection is challenging: a dynamic background can produce false positive errors, while camouflage can produce false negative errors. With moving cameras, the captured scenes are even more complicated to handle. We propose a new framework, called saliency detection via background model completion (SD-BMC), that comprises a background modeler and a deep learning background/foreground segmentation network. The background modeler generates an initial clean background image from a short image sequence. Based on the idea of video completion, a good background frame can be synthesized even when a changing background and moving objects co-exist. We adopt a background/foreground segmenter that, although pre-trained with a specific video dataset, can also detect saliency in unseen videos. The background modeler can adjust the background image dynamically when the segmenter's output deteriorates during processing of a long video. To the best of our knowledge, our framework is the first to adopt video completion for background modeling and saliency detection in videos captured by moving cameras. The F-measure results obtained on pan-tilt-zoom (PTZ) videos show that our proposed framework outperforms some deep learning-based background subtraction models by 11% or more. On more challenging videos, our framework also outperforms many high-ranking background subtraction methods by more than 3%.
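The feedback loop described in the abstract (background modeler → segmenter → quality check → background refresh) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the video-completion modeler is stood in by a temporal median, the pre-trained deep segmenter by simple absolute-difference thresholding, and the deterioration check by a foreground-coverage ratio; all function names and parameters here are hypothetical.

```python
import numpy as np

def init_background(frames):
    # Stand-in for the paper's video-completion background modeler:
    # a temporal median over a short bootstrap sequence (assumption).
    return np.median(np.stack(frames), axis=0)

def segment(frame, background, thresh=25):
    # Stand-in for the pre-trained background/foreground segmenter:
    # simple absolute-difference thresholding (assumption).
    return np.abs(frame.astype(int) - background.astype(int)) > thresh

def sd_bmc(frames, bootstrap=5, refresh_ratio=0.5):
    """Sketch of the SD-BMC feedback loop: when the foreground mask
    covers too much of the frame (a proxy for segmenter output
    deteriorating), rebuild the background from recent frames."""
    background = init_background(frames[:bootstrap])
    recent = list(frames[:bootstrap])
    masks = []
    for frame in frames[bootstrap:]:
        mask = segment(frame, background)
        if mask.mean() > refresh_ratio:
            # Segmentation looks unreliable: refresh the background
            # model and re-segment the current frame.
            background = init_background(recent)
            mask = segment(frame, background)
        recent = (recent + [frame])[-bootstrap:]
        masks.append(mask)
    return masks
```

The key design point mirrored here is the two-way coupling: the segmenter consumes the modeled background, and a quality signal on its output triggers background re-estimation, which is what lets the system cope with long videos and changing scenes.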

Research Area(s)

  • Background modeling, Background subtraction, Foreground segmentation, Mobile camera, PTZ camera, Saliency detection
