Learning from 4D Light Fields for Clear Vision in Poor Visibility Environments
Description

As one of the most important media for recording and perceiving the real world, images are an indispensable part of our daily life. Although imaging techniques have made great strides over the last decades, the quality of images captured in poor visibility environments (PVEs) is still limited, posing severe obstacles to downstream applications. Although plenty of computational methods have been proposed to enhance the quality of traditional 2D RGB images captured in PVEs, their performance remains unsatisfactory owing to the inherent challenges of these severely ill-posed inverse problems. This project aims to investigate novel machine learning methods for acquiring high-quality image representations of scenes in PVEs, including the low-light, rainy, and underwater conditions that are common in daily life. In sharp contrast to existing methods, which are largely based on 2D techniques, we will explore an advanced multi-dimensional imaging modality, 4D light fields (4D-LFs), which record the texture, implicit depth, and complementary information of 3D scenes by capturing geometrically structured multiple observations of the same scene, namely sub-aperture images (SAIs). The rich information embedded in 4D-LFs has the potential to achieve a breakthrough in performance. To fully utilize this high-dimensional information, we will investigate novel disparity/depth estimation modules that are robust to various degradation effects. To address the challenges of modeling high-dimensional visual signals, we will develop end-to-end deep learning-based frameworks, owing to their strong representation-learning ability and high capacity.
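To make the 4D-LF data structure concrete, the following minimal sketch (not part of the project itself) treats a light field under the common two-plane parameterization L(v, u, y, x) and warps one SAI toward the central view given a disparity value. The function name, the nearest-neighbor resampling, and the toy parameterization are illustrative assumptions; a real pipeline would use sub-pixel (e.g. bilinear) sampling.

```python
import numpy as np

def warp_sai_to_center(sai, disparity, dv, du):
    """Warp one sub-aperture image (SAI) toward the central view.

    Under a two-plane 4D-LF parameterization L(v, u, y, x), a scene point
    with disparity d seen from angular offset (dv, du) appears shifted by
    (d*dv, d*du) pixels relative to the central SAI. Nearest-neighbor
    resampling with edge clipping keeps this sketch dependency-free.
    """
    h, w = sai.shape
    ys, xs = np.mgrid[0:h, 0:w]
    sy = np.clip(np.round(ys + disparity * dv).astype(int), 0, h - 1)
    sx = np.clip(np.round(xs + disparity * du).astype(int), 0, w - 1)
    return sai[sy, sx]
```

With the correct disparity, every warped SAI reproduces the central view over the image interior; it is exactly this redundancy across geometrically structured observations that degradation-robust disparity estimation can exploit.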
Moreover, we will leverage unique knowledge of the image degradation processes to guide the design of the neural networks, making them interpretable, compact, and computationally efficient. Finally, to eliminate the burden of obtaining ground-truth high-quality/normal counterparts of degraded images, we will develop unsupervised/weakly-supervised methods that overcome the difficulty of collecting such data in practice. We will particularly focus on the loss functions, exploring statistical distribution-based metrics and the photometric consistency between SAIs. With our solid backgrounds and promising preliminary results, we envision that our investigations will provide practical solutions for high-quality imaging of scenes in PVEs. Because of the many advantages of learning-based methods, this investigation has the potential to expand application horizons, including but not limited to autonomous driving, driver assistance, marine resource exploration, and underwater farming/tourism. We believe that, beyond the three years envisioned for this project, its novel scientific findings will continue to motivate research on high-dimensional visual signal modeling and other applications in computer vision and image processing.
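The idea of a photometric-consistency objective between SAIs can be sketched as follows. This is an illustrative, assumed formulation (function name, scalar disparity, and L1 metric are choices of the sketch, not the project's actual loss): each SAI is warped to the central view under a candidate disparity, and the warped views are penalized for disagreeing with each other, which requires no ground-truth image.

```python
import numpy as np

def photometric_consistency_loss(sais, offsets, disparity):
    """Mean L1 photometric error among SAIs warped to the central view.

    `sais`: list of 2D views; `offsets`: matching angular offsets (dv, du)
    from the center; `disparity`: candidate scalar disparity. With the
    correct disparity, all warped views agree on (Lambertian) scene
    content, so the loss is minimized without any supervision signal.
    """
    h, w = sais[0].shape
    ys, xs = np.mgrid[0:h, 0:w]
    warped = []
    for sai, (dv, du) in zip(sais, offsets):
        sy = np.clip(np.round(ys + disparity * dv).astype(int), 0, h - 1)
        sx = np.clip(np.round(xs + disparity * du).astype(int), 0, w - 1)
        warped.append(sai[sy, sx])
    warped = np.stack(warped)
    # Deviation of each warped view from their mean ("pseudo central view").
    return np.abs(warped - warped.mean(axis=0)).mean()
```

In a learning setting, a differentiable warp would replace the nearest-neighbor indexing so the loss can train a disparity or restoration network end to end.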
Effective start/end date: 1/01/22 → …