Abstract
Contaminants such as dust, dirt, and moisture adhering to the camera lens can severely degrade the quality and clarity of the resulting image or video. In this paper, we propose a video restoration method that automatically removes these contaminants and produces a clean video. Our approach first detects attention maps that indicate the regions to be restored. To leverage the corresponding clean pixels from adjacent frames, we propose a flow completion module that hallucinates the flow of the background scene within the attention regions degraded by the contaminants. Guided by the attention maps and the completed flows, a recurrent technique restores the input frame by fetching clean pixels from adjacent frames. Finally, a multi-frame processing stage further refines the entire video sequence to enforce temporal consistency. The entire network is trained on a synthetic dataset that approximates the physical lighting properties of contaminant artifacts. Together, this new dataset and our novel framework allow our method to handle a variety of contaminants and to outperform competitive restoration approaches both qualitatively and quantitatively.
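The pipeline described in the abstract (attention detection, flow completion in the degraded regions, then pixel fetching from neighboring frames) can be illustrated with a deliberately simplified sketch. Note this is a toy NumPy approximation under strong assumptions, not the paper's learned networks: `detect_attention`, `complete_flow`, and `restore_frame` are hypothetical stand-ins, attention is approximated by thresholded deviation from a reference, and the hallucinated flow is just the mean of the surrounding clean flow.

```python
import numpy as np

def detect_attention(frame, clean_ref, thresh=0.1):
    # Hypothetical stand-in for the learned attention module:
    # flag pixels whose intensity deviates strongly from a clean reference.
    diff = np.abs(frame - clean_ref).mean(axis=-1)
    return (diff > thresh).astype(np.float32)

def complete_flow(flow, mask):
    # Hallucinate flow inside the masked (contaminated) region by filling it
    # with the mean of the surrounding clean-region flow (crude placeholder
    # for the paper's flow completion module).
    completed = flow.copy()
    clean = mask < 0.5
    if clean.any():
        completed[~clean] = flow[clean].mean(axis=0)
    return completed

def restore_frame(frame, neighbor, flow, mask):
    # Fetch pixels from an adjacent frame at flow-displaced locations,
    # but only inside the attention (contaminated) region.
    h, w = mask.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip((ys + flow[..., 1]).round().astype(int), 0, h - 1)
    src_x = np.clip((xs + flow[..., 0]).round().astype(int), 0, w - 1)
    fetched = neighbor[src_y, src_x]
    m = mask[..., None]
    return (1 - m) * frame + m * fetched
```

With zero flow, the sketch simply replaces contaminated pixels with the co-located pixels of the neighboring frame while leaving clean pixels untouched; the real method would also run a multi-frame stage to enforce temporal consistency across the sequence.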
| Original language | English |
|---|---|
| Title of host publication | Proceedings of the IEEE/CVF International Conference on Computer Vision 2021 (ICCV) |
| Publisher | IEEE |
| Pages | 1991-2000 |
| Number of pages | 10 |
| ISBN (Print) | 9781665428125 |
| DOIs | |
| Publication status | Published - 11 Oct 2021 |
| Event | IEEE International Conference on Computer Vision 2021 - Virtual<br>Duration: 11 Oct 2021 → 17 Oct 2021<br>https://iccv2021.thecvf.com/<br>https://openaccess.thecvf.com/ICCV2021 |
Publication series
| Name | Proceedings of the IEEE International Conference on Computer Vision |
|---|---|
| ISSN (Print) | 1550-5499 |
Conference
| Conference | IEEE International Conference on Computer Vision 2021 |
|---|---|
| Abbreviated title | ICCV 2021 |
| Period | 11/10/21 → 17/10/21 |
| Internet address | https://iccv2021.thecvf.com/ |
Bibliographical note
Research Unit(s) information for this publication is provided by the author(s) concerned.

RGC Funding Information
- RGC-funded