Deep Guided Learning for Fast Multi-Exposure Image Fusion

Research output: Journal Publications and Reviews (RGC: 21, 22, 62) › Publication in refereed journal

4 Scopus Citations

Author(s)

  • Kede Ma
  • Zhengfang Duanmu
  • Hanwei Zhu
  • Yuming Fang
  • Zhou Wang

Detail(s)

Original language: English
Article number: 8906233
Pages (from-to): 2808-2819
Journal / Publication: IEEE Transactions on Image Processing
Volume: 29
Online published: 19 Nov 2019
Publication status: Published - 2020

Abstract

We propose a fast multi-exposure image fusion (MEF) method, namely MEF-Net, for static image sequences of arbitrary spatial resolution and exposure number. We first feed a low-resolution version of the input sequence to a fully convolutional network for weight map prediction. We then jointly upsample the weight maps using a guided filter. The final image is computed by a weighted fusion. Unlike conventional MEF methods, MEF-Net is trained end-to-end by optimizing the perceptually calibrated MEF structural similarity (MEF-SSIM) index over a database of training sequences at full resolution. Across an independent set of test sequences, we find that the optimized MEF-Net achieves consistent improvement in visual quality for most sequences, and runs 10 to 1000 times faster than state-of-the-art methods. The code is made publicly available at https://github.com/makedede/MEFNet.
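The abstract describes a three-stage pipeline: predict weight maps on a low-resolution copy of the exposure stack with a fully convolutional network, jointly upsample those maps to full resolution with a guided filter, and fuse by weighted averaging. The sketch below illustrates that flow under stated assumptions; it is not the authors' released implementation (see the GitHub link above). The names `WeightNet`, `guided_upsample`, and `fuse`, as well as the hyperparameters (`low_res`, `r`, `eps`) and the luminance guide, are illustrative choices, and the real MEF-Net trains the whole pipeline end-to-end against MEF-SSIM rather than using fixed filter settings.

```python
# Hedged PyTorch sketch of the pipeline described in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F

def box_filter(x, r):
    # Local mean over a (2r+1)x(2r+1) window; zero padding at the borders
    # is a simplification relative to a careful guided filter implementation.
    return F.avg_pool2d(x, kernel_size=2 * r + 1, stride=1, padding=r)

def guided_upsample(lr_guide, lr_src, hr_guide, r=1, eps=1e-4):
    # Fast-guided-filter-style joint upsampling: fit the linear coefficients
    # (a, b) at low resolution, upsample them, and apply to the full-res guide.
    mean_I = box_filter(lr_guide, r)
    mean_p = box_filter(lr_src, r)
    cov_Ip = box_filter(lr_guide * lr_src, r) - mean_I * mean_p
    var_I = box_filter(lr_guide * lr_guide, r) - mean_I * mean_I
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    size = hr_guide.shape[-2:]
    a = F.interpolate(a, size=size, mode='bilinear', align_corners=False)
    b = F.interpolate(b, size=size, mode='bilinear', align_corners=False)
    return a * hr_guide + b

class WeightNet(nn.Module):
    # Tiny fully convolutional net predicting one scalar weight map per
    # exposure; the actual MEF-Net architecture differs.
    def __init__(self, ch=24):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def fuse(seq, net, low_res=128):
    # seq: (K, 3, H, W) static exposure stack in [0, 1]; K is arbitrary.
    K, _, H, W = seq.shape
    lo = F.interpolate(seq, size=(low_res, low_res), mode='bilinear',
                       align_corners=False)
    w_lo = net(lo)                                    # (K, 1, low, low)
    lr_guide = lo.mean(dim=1, keepdim=True)           # low-res luminance guide
    hr_guide = seq.mean(dim=1, keepdim=True)          # full-res luminance guide
    w_hi = guided_upsample(lr_guide, w_lo, hr_guide)  # joint upsampling
    w_hi = torch.softmax(w_hi, dim=0)                 # normalize over exposures
    return (w_hi * seq).sum(dim=0)                    # weighted fusion

# Example: fuse a random 3-exposure stack at 512x512 resolution.
# fused = fuse(torch.rand(3, 3, 512, 512), WeightNet().eval())
```

Because the network only ever sees the low-resolution copy while the guided filter does the full-resolution work, the cost of the learned component is independent of the input size, which is what enables the reported speedups on sequences of arbitrary spatial resolution.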

Research Area(s)

  • computational photography, convolutional neural networks, guided filtering, Multi-exposure image fusion

Citation Format(s)

Deep Guided Learning for Fast Multi-Exposure Image Fusion. / Ma, Kede; Duanmu, Zhengfang; Zhu, Hanwei; Fang, Yuming; Wang, Zhou.

In: IEEE Transactions on Image Processing, Vol. 29, 8906233, 2020, p. 2808-2819.
