Image Correction via Deep Reciprocating HDR Transformation

Research output: Chapters, Conference Papers, Creative and Literary Works (RGC: 12, 32, 41, 45); 32_Refereed conference paper (with ISBN/ISSN); peer-reviewed

48 Scopus Citations

Author(s)

  • Xin Yang
  • Ke Xu
  • Yibing Song
  • Qiang Zhang
  • Xiaopeng Wei
  • Rynson W.H. Lau

Related Research Unit(s)

Detail(s)

Original language: English
Title of host publication: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2018
Subtitle of host publication: Proceedings
Publisher: IEEE Computer Society
Pages: 1798-1807
ISBN (Print): 9781538664209
Publication status: Published - Jun 2018

Publication series

Name: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
ISSN (Print): 1063-6919

Conference

Title: 31st IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2018)
Location: Calvin L. Rampton Salt Palace Convention Center
Place: United States
City: Salt Lake City
Period: 18 - 22 June 2018

Abstract

Image correction aims to adjust an input image into a visually pleasing one. Existing approaches are proposed mainly from the perspective of image pixel manipulation, and they are not effective in recovering the details in under-/over-exposed regions. In this paper, we revisit the image formation procedure and notice that the details missing in these regions exist in the corresponding high dynamic range (HDR) data. These details are well perceived by the human eye but diminished in the low dynamic range (LDR) domain because of the tone mapping process. Therefore, we formulate the image correction task as an HDR transformation process and propose a novel approach called Deep Reciprocating HDR Transformation (DRHT). Given an input LDR image, we first reconstruct the missing details in the HDR domain. We then perform tone mapping on the predicted HDR data to generate the output LDR image with the recovered details. To this end, we propose a unified framework consisting of two CNNs, one for HDR reconstruction and one for tone mapping, which are integrated end-to-end for joint training and prediction. Experiments on standard benchmarks demonstrate that the proposed method performs favorably against state-of-the-art image correction methods.
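To make the two-stage idea in the abstract concrete, the following is a minimal sketch, not the authors' implementation: one CNN maps the LDR input to predicted HDR data, a second CNN tone-maps that prediction back to a corrected LDR image, and both are trained jointly. The network depths, layer widths, and loss terms here are illustrative assumptions only.

```python
# Hypothetical sketch of an LDR -> HDR -> LDR "reciprocating" pipeline,
# loosely following the DRHT description in the abstract. Not the paper's code.
import torch
import torch.nn as nn

class SimpleEncoderDecoder(nn.Module):
    """Tiny fully convolutional stand-in for each stage's CNN (assumed architecture)."""
    def __init__(self, in_ch=3, out_ch=3, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, out_ch, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

class ReciprocatingHDRPipeline(nn.Module):
    """Stage 1 recovers details in the HDR domain; stage 2 tone-maps back to LDR."""
    def __init__(self):
        super().__init__()
        self.hdr_reconstruction = SimpleEncoderDecoder()
        self.tone_mapping = SimpleEncoderDecoder()

    def forward(self, ldr_input):
        hdr_pred = self.hdr_reconstruction(ldr_input)
        ldr_out = torch.sigmoid(self.tone_mapping(hdr_pred))  # keep output in [0, 1]
        return hdr_pred, ldr_out

if __name__ == "__main__":
    # Joint (end-to-end) training step with dummy data; real training would use
    # paired LDR/HDR ground truth and the paper's own loss functions.
    model = ReciprocatingHDRPipeline()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.MSELoss()

    ldr_in = torch.rand(2, 3, 64, 64)   # dummy under/over-exposed inputs
    hdr_gt = torch.rand(2, 3, 64, 64)   # dummy HDR ground truth
    ldr_gt = torch.rand(2, 3, 64, 64)   # dummy corrected LDR ground truth

    hdr_pred, ldr_out = model(ldr_in)
    loss = criterion(hdr_pred, hdr_gt) + criterion(ldr_out, ldr_gt)
    loss.backward()
    optimizer.step()
```

Supervising both the intermediate HDR prediction and the final LDR output is what allows the two stages to be trained jointly rather than in isolation, which is the key design point stated in the abstract.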

Bibliographic Note

Research Unit(s) information for this publication is provided by the author(s) concerned.

Citation Format(s)

Image Correction via Deep Reciprocating HDR Transformation. / Yang, Xin; Xu, Ke; Song, Yibing; Zhang, Qiang; Wei, Xiaopeng; Lau, Rynson W.H.

2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2018: Proceedings. IEEE Computer Society, 2018. pp. 1798-1807, article 8578291 (Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition).
