RCNet: Deep Recurrent Collaborative Network for Multi-View Low-Light Image Enhancement

Hao Luo, Baoliang Chen, Lingyu Zhu, Peilin Chen, Shiqi Wang*

*Corresponding author for this work

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

Abstract

Observing a scene from multiple perspectives provides a more comprehensive visual experience. However, when multiple views are acquired in the dark, the strong correlations among views are severely degraded, making it challenging to improve scene understanding with auxiliary views. Recent single-image enhancement methods may not deliver consistently desirable restoration across all views because they ignore the potential feature correspondences among different views. To alleviate this issue, we make the first attempt to investigate multi-view low-light image enhancement. First, we construct a new dataset called Multi-View Low-light Triplets (MVLT), comprising 1,860 pairs of image triplets with large illumination ranges and wide noise distributions. Each triplet captures the same scene from three different viewpoints. Second, we propose a deep multi-view enhancement framework based on the Recurrent Collaborative Network (RCNet). Specifically, to exploit similar texture correspondences across different views, we design the recurrent feature enhancement, alignment, and fusion (ReEAF) module, in which intra-view feature enhancement (Intra-view EN) followed by inter-view feature alignment and fusion (Inter-view AF) models intra-view and inter-view feature propagation sequentially via multi-view collaboration. In addition, two modules, from enhancement to alignment (E2A) and from alignment to enhancement (A2E), are developed to enable interactions between Intra-view EN and Inter-view AF; they explicitly utilize attentive feature weighting and sampling for enhancement and alignment, respectively. Experimental results demonstrate that our RCNet significantly outperforms other state-of-the-art methods. Our dataset, code, and models will be made available at https://github.com/hluo29/RCNet. © 2025 IEEE.
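The recurrent flow the abstract describes (Intra-view EN, then E2A, Inter-view AF, and A2E feeding the state forward) can be sketched schematically. The function and module names below are hypothetical stand-ins for the paper's learned CNN components, using simple list arithmetic only to illustrate the order of operations across the view sequence:

```python
# Schematic sketch of the ReEAF recurrence (hypothetical names; the real
# RCNet modules are learned networks, not these scalar stand-ins).

def intra_view_enhance(feat):
    # Intra-view EN: enhance the current view's own features
    # (stand-in: a fixed gain instead of a learned enhancement network).
    return [x * 2.0 for x in feat]

def inter_view_align_fuse(enhanced, hidden):
    # Inter-view AF: align the propagated state to the current view and
    # fuse (stand-in: element-wise averaging instead of learned alignment).
    return [(e + h) / 2.0 for e, h in zip(enhanced, hidden)]

def reeaf_forward(views):
    """Recurrently propagate features across the view sequence."""
    hidden = views[0]          # initialize the recurrent state
    outputs = []
    for feat in views:
        enhanced = intra_view_enhance(feat)              # Intra-view EN
        fused = inter_view_align_fuse(enhanced, hidden)  # E2A -> Inter-view AF
        hidden = fused                                   # A2E: carry state forward
        outputs.append(fused)
    return outputs

# One MVLT-style triplet of (toy) per-view features.
triplet = [[0.1, 0.2], [0.3, 0.4], [0.2, 0.1]]
restored = reeaf_forward(triplet)
```

The key design point this mirrors is that each view is first enhanced in isolation and only then fused with information propagated from the other views, so every output view benefits from the whole triplet.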
Original language: English
Number of pages: 14
Journal: IEEE Transactions on Multimedia
DOIs
Publication status: Online published - 2 Jan 2025

Funding

This work was supported in part by ITF Project GHP/044/21SZ, in part by RGC General Research Fund 11203220/11200323, and in part by the National Natural Science Foundation of China under Grant 62401214.

Research Keywords

  • collaborative network
  • inter-view alignment & fusion
  • intra-view enhancement
  • multi-view low-light enhancement

