TY - GEN
T1 - Densely Self-guided Wavelet Network for Image Denoising
AU - Liu, Wei
AU - Yan, Qiong
AU - Zhao, Yuzhi
PY - 2020/6
Y1 - 2020/6
N2 - During the past years, deep convolutional neural networks have achieved impressive success in image denoising. In this paper, we propose a densely self-guided wavelet network (DSWN) for real-world image denoising. The basic structure of DSWN is a top-down self-guidance architecture that efficiently incorporates multi-scale information and extracts good local features to recover clean images. Moreover, such a structure requires fewer parameters and achieves better effectiveness than the U-Net structure. To avoid information loss and achieve a larger receptive field, we embed the wavelet transform into DSWN. In addition, we apply densely residual learning to the convolution blocks to enhance the feature extraction capability of the proposed network. At the full-resolution level of DSWN, we adopt a double-branch structure to generate the final output. One branch tends to pay attention to dark areas, and the other performs better on bright areas. Such a double-branch strategy is able to handle noise at different exposures. The proposed network is validated on the BSD68, Kodak24, and SIDD+ benchmarks. Additional experimental results show that the proposed network outperforms most state-of-the-art image denoising solutions.
UR - http://www.scopus.com/inward/record.url?scp=85090133698&partnerID=8YFLogxK
U2 - 10.1109/CVPRW50498.2020.00224
DO - 10.1109/CVPRW50498.2020.00224
M3 - RGC 32 - Refereed conference paper (with host publication)
SN - 9781728193601
T3 - IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops
SP - 1742
EP - 1750
BT - Proceedings - 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops
PB - IEEE Computer Society
T2 - 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2020)
Y2 - 14 June 2020 through 19 June 2020
ER -