Abstract
Light field (LF) imaging has emerged as a powerful technique for capturing rich visual information of 3D scenes, enabling applications such as novel view rendering, depth estimation, scene editing, and virtual reality. However, the quality of LF images can be severely compromised at every stage of the imaging pipeline, from the scene to the sensor. Specifically, challenging scene conditions such as low-light and rainy environments degrade the captured images, limitations of the camera's optical module lead to low spatial resolution, and noise introduced by the sensor and imaging process can significantly degrade the final LF image quality.

This thesis presents a comprehensive study on enhancing LF imaging quality by addressing the challenges encountered throughout the entire imaging pipeline. We propose a set of novel methods targeting specific issues at each stage: scene-level enhancements, including low-light LF enhancement and unsupervised rain-free scene reconstruction; high-resolution LF reconstruction from coded aperture measurements, which resolves the trade-off between spatial and angular resolution in handheld LF cameras; and an effective denoising framework for sensor-level enhancement. These frameworks are summarized as follows:
To achieve efficient and effective feature embedding, we propose a probabilistic feature embedding (PFE), which learns a feature embedding architecture by assembling various low-dimensional convolution patterns in a probability space to fully capture spatial-angular information. Building upon PFE, we leverage the intrinsic linear imaging model of the coded aperture camera to construct a cycle-consistent 4-D LF reconstruction network operating on coded measurements. Moreover, we incorporate PFE into an iterative optimization framework for 4-D LF denoising. Extensive experiments demonstrate the significant superiority of our methods over state-of-the-art approaches on both real-world and synthetic 4-D LF images, quantitatively and qualitatively.
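To make the coded aperture imaging model and the cycle-consistency idea concrete, the sketch below shows a minimal PyTorch formulation: each 2-D measurement is a code-weighted sum of the LF's angular views, and the reconstructed LF is re-projected through the same linear model to enforce measurement consistency. All names (`CodedApertureSimulator`, `recon_net`, the single-shot weighting scheme) are illustrative assumptions, not the thesis's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CodedApertureSimulator(nn.Module):
    """Linear imaging model of a coded aperture camera: each 2-D
    measurement is a code-weighted sum of the 4-D LF's angular views.
    (Hypothetical minimal form; the thesis's model may differ.)"""
    def __init__(self, num_views, num_shots):
        super().__init__()
        # Learnable aperture codes: one weight per angular view per shot.
        self.codes = nn.Parameter(torch.rand(num_shots, num_views))

    def forward(self, lf):                      # lf: (B, V, H, W)
        codes = torch.sigmoid(self.codes)       # keep transmittance in [0, 1]
        # (B, V, H, W) x (S, V) -> (B, S, H, W): weighted angular sums.
        return torch.einsum('bvhw,sv->bshw', lf, codes)

def cycle_consistent_loss(recon_net, simulator, lf_gt):
    """Reconstruct the LF from simulated measurements, then re-project it
    through the same linear model and penalize the measurement mismatch."""
    meas = simulator(lf_gt)                     # simulate coded measurements
    lf_hat = recon_net(meas)                    # 4-D LF reconstruction
    meas_cycle = simulator(lf_hat)              # re-simulated measurements
    return F.l1_loss(lf_hat, lf_gt) + F.l1_loss(meas_cycle, meas)
```

Here `recon_net` stands in for the PFE-based reconstruction network; the cycle term constrains the output to remain consistent with the physical measurement process.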
We present a novel and interpretable end-to-end learning framework, the deep compensation unfolding network (DCUNet), for restoring LF images captured under low-light conditions. DCUNet adopts a multi-stage architecture that mimics the optimization process of solving an inverse imaging problem in a data-driven fashion. The framework uses the intermediate enhanced result to estimate the illumination map, which is then employed in the unfolding process to produce a new enhanced result. Additionally, DCUNet includes a content-associated deep compensation module at each optimization stage to suppress noise and illumination map estimation errors. To properly mine and leverage the unique characteristics of LF images, we propose a pseudo-explicit feature interaction module that comprehensively exploits the redundant information in LF images. Experimental results on both simulated and real datasets demonstrate the superiority of DCUNet over state-of-the-art methods, both qualitatively and quantitatively, and show that it better preserves the essential geometric structure of the enhanced LF images.
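The sketch below illustrates the general deep unfolding pattern described above: each stage estimates an illumination map from the current result, applies a Retinex-style inversion, and then applies a learned compensation step. For brevity it operates on a single sub-aperture image rather than a 4-D LF, and the layer choices are assumptions, not DCUNet's actual design.

```python
import torch
import torch.nn as nn

class UnfoldingStage(nn.Module):
    """One stage of a Retinex-style unfolding scheme (illustrative only)."""
    def __init__(self, ch=32):
        super().__init__()
        self.illum_net = nn.Sequential(            # illumination estimator
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 3, 3, padding=1), nn.Sigmoid())
        self.compensate = nn.Sequential(           # compensation module
            nn.Conv2d(6, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 3, 3, padding=1))

    def forward(self, x, low):
        illum = self.illum_net(x).clamp(min=1e-3)  # estimate illumination map
        enhanced = low / illum                     # Retinex-style inversion
        # Residual correction to suppress noise and estimation errors.
        return enhanced + self.compensate(torch.cat([enhanced, low], dim=1))

class UnfoldingNet(nn.Module):
    """Multi-stage unfolding: each stage refines the previous estimate."""
    def __init__(self, num_stages=3):
        super().__init__()
        self.stages = nn.ModuleList(UnfoldingStage() for _ in range(num_stages))

    def forward(self, low):                        # low: (B, 3, H, W)
        x = low
        for stage in self.stages:                  # iterative refinement
            x = stage(x, low)
        return x
```

Each stage re-estimates the illumination map from the intermediate enhanced result, mirroring how the iterations of a model-based solver would alternate between variable updates.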
We propose RainyScape, an unsupervised framework for reconstructing clean scenes from a collection of multi-view rainy images. RainyScape consists of two main modules: a neural rendering module and a rain-prediction module, the latter comprising a predictor network and a learnable latent embedding that captures the rain characteristics of the scene. Specifically, exploiting the spectral bias of neural networks, we first optimize the neural rendering pipeline to obtain a low-frequency scene representation. We then jointly optimize the two modules, driven by the proposed adaptive direction-sensitive gradient-based reconstruction loss, which encourages the network to distinguish between scene details and rain streaks, facilitating the propagation of gradients to the relevant components. Extensive experiments on both the classic neural radiance field and the recently proposed 3D Gaussian splatting demonstrate the superiority of our method in effectively eliminating rain streaks and rendering clean images, achieving state-of-the-art performance.
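To clarify how a direction-sensitive gradient loss can separate rain from scene content, the following is a minimal sketch: the rainy observation is modeled as rendered scene plus predicted rain, and gradient mismatches along the rain-streak direction are weighted differently from those across it. The fixed weights and the rain-is-vertical assumption are simplifications; the thesis's loss is adaptive.

```python
import torch
import torch.nn.functional as F

def directional_gradients(img):
    """Finite-difference gradients along x and y; img: (B, C, H, W)."""
    gx = img[..., :, 1:] - img[..., :, :-1]   # horizontal differences
    gy = img[..., 1:, :] - img[..., :-1, :]   # vertical differences
    return gx, gy

def direction_sensitive_loss(rendered, rain_pred, rainy_gt, w_x=1.0, w_y=0.1):
    """Hedged sketch of a direction-sensitive gradient reconstruction loss.
    Assuming roughly vertical streaks, the vertical-gradient term is
    down-weighted (w_y < w_x), steering streak content toward rain_pred
    and scene detail toward the rendering."""
    composite = rendered + rain_pred           # rainy image = scene + rain
    recon = F.l1_loss(composite, rainy_gt)     # data-fidelity term
    gx_c, gy_c = directional_gradients(composite)
    gx_t, gy_t = directional_gradients(rainy_gt)
    grad_term = w_x * F.l1_loss(gx_c, gx_t) + w_y * F.l1_loss(gy_c, gy_t)
    return recon + grad_term
```

Because the renderer is biased toward low-frequency content after its warm-up optimization, the high-frequency, directionally coherent residual is naturally absorbed by the rain-prediction module under this loss.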
| Date of Award | 28 Aug 2024 |
|---|---|
| Original language | English |
| Awarding Institution | |
| Supervisor | Junhui HOU (Supervisor) |