Multi-Exposure Decomposition-Fusion Model for High Dynamic Range Image Saliency Detection

Research output: Publication in refereed journal


Author(s)

  • Xu Wang
  • Zhenhao Sun
  • Yuming Fang
  • Lin Ma


Detail(s)

Original language: English
Number of pages: 12
Journal / Publication: IEEE Transactions on Circuits and Systems for Video Technology
Publication status: Online published - 3 Apr 2020

Abstract

High dynamic range (HDR) imaging techniques have improved greatly over the past few decades. However, the saliency detection task on HDR content remains far from well explored. In this paper, we introduce a multi-exposure decomposition-fusion model for HDR image saliency detection, inspired by the brightness adaptation mechanism. The proposed model is composed of three modules. First, a decomposition module converts the input raw HDR image into a stack of LDR images by uniformly sampling the exposure time range. Second, a saliency region proposal network generates a candidate saliency map for each LDR image in the exposure stack. Finally, an uncertainty-weighting-based fusion algorithm merges the obtained LDR saliency maps into an overall saliency map for the input HDR image. Extensive experiments show that the proposed model achieves superior performance compared with state-of-the-art methods on existing HDR eye fixation databases. The source code of the proposed model is publicly available at https://github.com/sunnycia/DFHSal.
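The three-module pipeline in the abstract can be sketched in code. This is a minimal illustration, not the authors' implementation: the gamma tone curve, the stand-in saliency operator (the paper uses a learned saliency region proposal network), and the entropy-based uncertainty proxy in the fusion step are all assumptions made here for a self-contained example, and the image is treated as a single-channel luminance map.

```python
import numpy as np

def decompose(hdr, n_exposures=5, gamma=2.2):
    """Module 1: convert a raw HDR image into a stack of LDR images
    by uniformly sampling the (log) exposure range.
    ASSUMPTION: simple divide-and-gamma tone curve, not the paper's."""
    hi = np.log2(hdr.max())
    lo = hi - n_exposures  # assumed sampling span
    stack = []
    for ev in np.linspace(lo, hi, n_exposures):
        ldr = np.clip((hdr / 2.0 ** ev) ** (1.0 / gamma), 0.0, 1.0)
        stack.append(ldr)
    return stack

def saliency(ldr, eps=1e-8):
    """Module 2 stand-in: global-contrast saliency. The paper uses a
    saliency region proposal network here; this placeholder only
    keeps the example runnable."""
    sal = np.abs(ldr - ldr.mean())
    return (sal - sal.min()) / (sal.max() - sal.min() + eps)

def fuse(sal_maps, eps=1e-8):
    """Module 3: uncertainty-weighted fusion. ASSUMPTION: each map's
    uncertainty is proxied by its entropy; maps with lower entropy
    (more concentrated saliency) receive higher weight."""
    weights = []
    for s in sal_maps:
        p = s / (s.sum() + eps)                    # normalize to a distribution
        h = -(p * np.log(p + eps)).sum()           # entropy as uncertainty proxy
        weights.append(1.0 / (h + eps))
    w = np.array(weights)
    w /= w.sum()
    fused = sum(wi * s for wi, s in zip(w, sal_maps))
    return (fused - fused.min()) / (fused.max() - fused.min() + eps)

# End-to-end: HDR image -> LDR exposure stack -> per-exposure maps -> fused map
hdr = np.random.RandomState(0).rand(32, 32) * 100.0 + 1e-3
stack = decompose(hdr)
fused = fuse([saliency(ldr) for ldr in stack])
```

The fused map has the same spatial size as the input and values in [0, 1]; swapping `saliency` for a trained network would recover the structure of the proposed model.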

Research Area(s)

  • High dynamic range, brightness adaptation, image saliency detection, deep learning
