A combination of background modeler and encoder-decoder CNN for background/foreground segregation in image sequence

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

1 Scopus Citation

Detail(s)

Original language: English
Pages (from-to): 1297–1304
Journal / Publication: Signal, Image and Video Processing
Volume: 17
Issue number: 4
Online published: 12 Aug 2022
Publication status: Published - Jun 2023

Abstract

Detection of visual change or anomaly in an image sequence is a common computer vision problem that can be formulated as background/foreground segregation. To achieve this, a background model is generated and the target (foreground) is detected via background subtraction. We propose a framework for visual change detection with three main modules: a background modeler, a convolutional neural network (CNN), and a feedback scheme for background model updating. By analyzing a short image sequence, the background modeler generates a single image that represents the background of that video. The background image frame and individual frames of the image sequence are input to the CNN for background/foreground segregation. We design an encoder-decoder CNN that produces a binary segmentation map, whose output indicates the regions of visual change in the current image frame. For long-term analysis, maintenance of the background model is needed; a feedback scheme is proposed that dynamically updates the colors of the background frame. Results on the benchmark dataset show that our proposed framework outperforms many high-ranking background subtraction algorithms by 9.9% or more. © The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature 2022.
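The sketch below is not the authors' released code; it is a minimal PyTorch illustration of the pipeline as described in the abstract, assuming the background frame and the current frame are concatenated along the channel axis before entering an encoder-decoder network, and assuming a simple blending rate `alpha` for the feedback background update. Layer widths, kernel sizes, and the update rule are illustrative choices, not details taken from the paper.

```python
import torch
import torch.nn as nn


class EncoderDecoderSeg(nn.Module):
    """Encoder-decoder CNN: (background frame, current frame) -> binary change map."""

    def __init__(self, in_ch: int = 6):
        super().__init__()
        # Encoder: two strided convolutions downsample the 6-channel input by 4x.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Decoder: transposed convolutions restore the original resolution, 1 output channel.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 1, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, background: torch.Tensor, frame: torch.Tensor) -> torch.Tensor:
        x = torch.cat([background, frame], dim=1)  # B x 6 x H x W
        logits = self.decoder(self.encoder(x))     # B x 1 x H x W
        return torch.sigmoid(logits)               # per-pixel foreground probability


def update_background(background, frame, fg_mask, alpha=0.05):
    """Feedback step (assumed form): blend new colours into pixels judged as static."""
    static = (fg_mask < 0.5).float()               # 1 where no change was detected
    return background * (1 - alpha * static) + frame * (alpha * static)


# Usage: bg and frame are 1 x 3 x H x W tensors scaled to [0, 1].
bg = torch.rand(1, 3, 64, 64)
frame = torch.rand(1, 3, 64, 64)
net = EncoderDecoderSeg()
mask = net(bg, frame)                  # binary segmentation map after thresholding
bg = update_background(bg, frame, mask)  # dynamically refresh the background model
```

The threshold of 0.5 and the exponential blending are placeholders for whatever decision rule and maintenance policy the paper actually uses; they only illustrate how the feedback loop closes the pipeline.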

Research Area(s)

  • Anomaly detection, Background generation, Background subtraction, Change detection, Deep learning network