Saliency Detection using Deep Features and Affinity-based Robust Background Subtraction

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

24 Scopus Citations

Detail(s)

Original language: English
Pages (from-to): 2902-2916
Journal / Publication: IEEE Transactions on Multimedia
Volume: 23
Online published: 28 Aug 2020
Publication status: Published - 2021

Abstract

Most existing saliency methods measure foreground saliency using the contrast of a foreground region with its local context, or boundary priors and spatial compactness. These methods are not powerful enough to extract a precise salient region from noisy and cluttered backgrounds. To evaluate the contrast of salient and background regions effectively, we consider high-level features from both supervised and unsupervised methods. We propose an affinity-based robust background subtraction technique and a maximum attention map computed from a pre-trained convolutional neural network. The affinity-based technique uses pixel similarities to propagate the values of salient pixels among foreground and background regions and their union. The salient pixel value controls the foreground and background information by using multiple pixel affinities. The maximum attention map is derived from the convolutional neural network using features of the pooling and ReLU layers. This method can detect salient regions in images with noisy and cluttered backgrounds. Our experimental results on six saliency data sets and benchmarks demonstrate the effectiveness of the proposed approach and show that it improves detection quality beyond current saliency detection methods.
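To make the two components described above more concrete, here is a minimal, hedged sketch of an affinity-based propagation step. It is not the authors' exact formulation; it follows the common graph-propagation recipe in which region saliency values are spread through an affinity matrix built from feature similarities. All names, the Gaussian kernel, and the parameters below are illustrative assumptions.

```python
import numpy as np

def propagate_saliency(features, seeds, sigma=0.1, alpha=0.99):
    """Spread initial saliency seeds over regions via an affinity matrix.

    features : (N, d) array of region descriptors (e.g. colour or deep features)
    seeds    : (N,) initial foreground/background saliency values
    """
    # Pairwise affinities from feature similarity (Gaussian kernel).
    d2 = np.sum((features[:, None, :] - features[None, :, :]) ** 2, axis=-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)

    # Degree matrix and closed-form propagation s = (D - alpha*W)^(-1) y,
    # with a small ridge term added for numerical stability.
    D = np.diag(W.sum(axis=1))
    s = np.linalg.solve(D - alpha * W + 1e-6 * np.eye(len(seeds)), seeds)

    # Normalise the propagated values to [0, 1].
    return (s - s.min()) / (s.max() - s.min() + 1e-8)
```

Similarly, a maximum attention map can be illustrated by taking channel-wise maxima over the pooling/ReLU activations of a pre-trained network. The backbone (VGG-16), the layer indices, and the fusion rule below are assumptions made for this sketch, not the configuration reported in the paper.

```python
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

def max_attention_map(image_path, layer_indices=(4, 9, 16, 23, 30)):
    """Coarse attention map from pooling-stage activations of VGG-16."""
    vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)

    maps = []
    with torch.no_grad():
        for i, layer in enumerate(vgg):
            x = layer(x)
            if i in layer_indices:  # outputs of the max-pooling stages
                # Channel-wise maximum gives one spatial map per stage.
                m = x.max(dim=1, keepdim=True).values
                m = F.interpolate(m, size=(224, 224), mode="bilinear",
                                  align_corners=False)
                maps.append(m)

    # Fuse the per-stage maps with an element-wise maximum and normalise.
    att = torch.cat(maps, dim=1).max(dim=1).values.squeeze(0)
    return (att - att.min()) / (att.max() - att.min() + 1e-8)
```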

Research Area(s)

  • affinity matrix, attention map, background subtraction, convolutional neural network, salient region

Bibliographic Note

Information for this record is supplemented by the author(s) concerned.