TY - GEN
T1 - HD-Net
T2 - IEEE 24th International Workshop on Multimedia Signal Processing (MMSP)
AU - Hu, Kejian
AU - Zhang, Zhichen
AU - Cai, Xiaowen
AU - Chen, Xiang
AU - Jiang, Nanfeng
AU - Zhou, Yu
AU - Zhao, Tiesong
PY - 2022
Y1 - 2022
N2 - Rain streaks usually result in severe visual degradation and foreground occlusion, affecting the quality of computer vision tasks in outdoor scenes. Currently, mainstream methods for single-image deraining are data-driven. However, deep learning networks can be imperfect, with limited capacity for learning global information from rain streaks across the whole image. To solve this problem, we propose a novel Hierarchical Distillation Network (HDNet). In this network, a Hierarchical Feature Extraction Block (HFEB) fully utilizes the Transformer's learning ability on high-level features, integrates local detail extraction with global structure representation, and compensates for the weakness of the Convolutional Neural Network (CNN), which is overattentive to low-level image features. Furthermore, a Distillation-Calibration Block (DCB) is adopted to avoid feature redundancy during model training and to calibrate channel and spatial information through feature transmission, which significantly improves learning efficiency. Finally, experimental results show that our model outperforms traditional CNN models and state-of-the-art methods. © 2022 IEEE.
AB - Rain streaks usually result in severe visual degradation and foreground occlusion, affecting the quality of computer vision tasks in outdoor scenes. Currently, mainstream methods for single-image deraining are data-driven. However, deep learning networks can be imperfect, with limited capacity for learning global information from rain streaks across the whole image. To solve this problem, we propose a novel Hierarchical Distillation Network (HDNet). In this network, a Hierarchical Feature Extraction Block (HFEB) fully utilizes the Transformer's learning ability on high-level features, integrates local detail extraction with global structure representation, and compensates for the weakness of the Convolutional Neural Network (CNN), which is overattentive to low-level image features. Furthermore, a Distillation-Calibration Block (DCB) is adopted to avoid feature redundancy during model training and to calibrate channel and spatial information through feature transmission, which significantly improves learning efficiency. Finally, experimental results show that our model outperforms traditional CNN models and state-of-the-art methods. © 2022 IEEE.
KW - Image deraining
KW - Transformer
KW - Feature distillation
KW - Coordinate calibration
UR - http://gateway.isiknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcAuth=LinksAMR&SrcApp=PARTNER_APP&DestLinkType=FullRecord&DestApp=WOS&KeyUT=000893205800012
UR - https://www.scopus.com/record/pubmetrics.uri?eid=2-s2.0-85143587655&origin=recordpage
UR - http://www.scopus.com/inward/record.url?scp=85143587655&partnerID=8YFLogxK
U2 - 10.1109/MMSP55362.2022.9948739
DO - 10.1109/MMSP55362.2022.9948739
M3 - RGC 32 - Refereed conference paper (with host publication)
T3 - IEEE International Workshop on Multimedia Signal Processing
BT - 2022 IEEE 24th International Workshop on Multimedia Signal Processing (MMSP)
PB - IEEE
Y2 - 26 September 2022 through 28 September 2022
ER -