Image Illumination Modeling and Processing
圖像光照建模與處理算法研究 (Research on Image Illumination Modeling and Processing Algorithms)
Student thesis: Doctoral Thesis
Award date: 6 Feb 2018
Permanent Link: https://scholars.cityu.edu.hk/en/theses/theses(345b2a70-852d-47ad-b8c2-96bd3bc99902).html
Abstract
As a main factor in imaging, complex illumination variation causes many problems for a variety of computer vision tasks and their applications, such as object recognition, tracking, and scene understanding. It degrades the performance of computer vision algorithms and impedes their adaptability to different and complex illumination conditions. Image illumination modeling and processing are therefore of great practical significance and have attracted much attention in recent years. Previous methods for image illumination processing have devoted much effort to designing hand-crafted illumination-invariant features, which are then combined by statistical-learning-based classifiers to reduce the adverse effect of illumination variation on computer vision tasks. However, these hand-crafted features often lack robustness under different and complex illumination conditions, or are valid only in specific situations. In this dissertation, instead of designing illumination-invariant features purely from image data, more universal and practical illumination models are derived and proposed. Based on these illumination models, algorithms are then proposed to handle the problems in computer vision caused by illumination variation.
In this dissertation, we investigate a number of illumination-related issues, including image illumination modeling, illumination-invariant images, shadow removal algorithms, and the application of these algorithms to several computer vision tasks. For each topic, extensive experimental comparisons with state-of-the-art methods are presented to validate the proposed models and algorithms. The main research contributions of this dissertation can be summarized as follows:
1. From the viewpoint of local illumination variation, a comprehensive evaluation of the shadow features commonly applied in shadow processing is presented, covering the performance ranking, limitations, and effectiveness of each feature. The feature analyses and experimental comparisons show that these purely image-data-based shadow features are often ambiguous and cannot effectively characterize the specificity of shadow regions. To the best of our knowledge, this is the first work to evaluate shadow features, and it can offer guidance for future illumination modeling and shadow processing algorithms.
2. For outdoor illumination, a novel and effective physically based illumination model is proposed. In this model, a set of linear equations is set up for each RGB pixel value to describe its variation under different illuminations. Through orthogonal decomposition of the solution space of this linear system, a color image can be decomposed directly into an illumination-invariant image and the illuminant intensity. By rigorous mathematical deduction, this color illumination-invariant image has an explicit and simple mathematical expression, and can be applied directly in real-time applications to resist illumination variation.
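A minimal sketch of this kind of orthogonal decomposition, assuming (purely for illustration) that an illumination change shifts log-RGB along the achromatic direction; the thesis derives the actual direction and decomposition from its physical model, so the `shift_dir` default here is a stand-in assumption:

```python
import numpy as np

def invariant_decomposition(rgb, shift_dir=None, eps=1e-6):
    """Split an RGB pixel into an illuminant-intensity scalar and an
    illumination-invariant component by orthogonal projection in log-RGB.

    In log-RGB space, a multiplicative illumination change becomes an
    additive shift; projecting onto `shift_dir` isolates the intensity
    part, and the orthogonal residual is invariant to that shift.
    """
    log_rgb = np.log(np.asarray(rgb, dtype=float) + eps)
    if shift_dir is None:
        # Assumed: illumination scales all channels equally (achromatic axis).
        shift_dir = np.ones(3) / np.sqrt(3.0)
    intensity = float(log_rgb @ shift_dir)        # illuminant-intensity part
    invariant = log_rgb - intensity * shift_dir   # illumination-invariant part
    return intensity, invariant
```

For example, doubling the illuminant (scaling the pixel by 2) changes only the intensity component, while the invariant component stays fixed.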
3. To handle the local illumination variation caused by shadows under all types of light sources, a universal deep-learning-based illumination model (DeshadowNet) is proposed. DeshadowNet takes a single shadow image as input and directly models the mapping from a shadow image to its illumination attenuation effects, which can then be used to recover a shadow-free image by a pixel-wise linear operation. DeshadowNet is designed with a multi-context mechanism and trained in a unified, end-to-end manner. It imposes no assumptions on the light sources and requires no separate shadow detection step; it is therefore adaptive to shadows cast by various types of light sources, and works well for shadows with widely varying penumbra widths.
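The final pixel-wise linear recovery step can be sketched as follows. Here `shadow_matte` is a stand-in for the illumination-attenuation map that the network would predict (values in (0, 1], with 1 meaning unattenuated light); the form of the recovery as a per-pixel division is an illustrative assumption, not the thesis's exact formulation:

```python
import numpy as np

def remove_shadow(shadow_img, shadow_matte, eps=1e-6):
    """Recover a shadow-free image from a shadow image and a predicted
    per-pixel illumination-attenuation map, via a pixel-wise linear op.

    shadow_img and shadow_matte are float arrays of the same shape
    (or broadcastable); dividing each pixel by its attenuation factor
    undoes the shadow's darkening.
    """
    shadow_img = np.asarray(shadow_img, dtype=float)
    shadow_matte = np.asarray(shadow_matte, dtype=float)
    # Clip the matte away from zero so fully dark predictions do not blow up.
    return shadow_img / np.clip(shadow_matte, eps, 1.0)
```

A pixel darkened to half its lit value (matte 0.5) is thus mapped back to its unshadowed value; penumbra pixels with intermediate matte values are brightened proportionally less.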
4. Object color constancy and illumination-invariant-based RGB-D salient object detection methods are proposed to demonstrate the effectiveness of our physically based and deep-learning-based illumination models.
Keywords: image illumination modeling, image illumination processing, illumination invariant features, shadow removal, salient object detection, deep learning