Dynamic Scene Deblurring Using Spatially Variant Recurrent Neural Networks

Research output: Refereed conference paper (with ISBN/ISSN), peer-reviewed

89 Scopus Citations

Author(s)

  • Jiawei Zhang
  • Jinshan Pan
  • Jimmy Ren
  • Yibing Song
  • Linchao Bao
  • Rynson W. H. Lau
  • Ming-Hsuan Yang

Related Research Unit(s)

Detail(s)

Original language: English
Title of host publication: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2018
Subtitle of host publication: Proceedings
Publisher: IEEE Computer Society
Pages: 2521-2529
ISBN (Print): 9781538664209
Publication status: Published - Jun 2018

Publication series

Name: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
ISSN (Print): 1063-6919

Conference

Title: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2018)
Place: United States
City: Salt Lake City
Period: 18 - 22 June 2018

Abstract

Deblurring images captured from dynamic scenes is challenging due to the spatially variant blur caused by camera shake and object motion at different scene depths. Although recent works based on deep neural networks have shown great progress on this problem, their models are usually large and computationally expensive. In this paper, we propose a novel spatially variant neural network to address the problem. The proposed network is composed of three deep convolutional neural networks (CNNs) and a recurrent neural network (RNN). The RNN is used as a deconvolution operator applied to feature maps extracted from the input image by one of the CNNs. Another CNN is used to learn the weights for the RNN at every location. As a result, the RNN is spatially variant and can implicitly model the deblurring process with spatially variant kernels. The third CNN is used to reconstruct the final deblurred feature maps into the restored image. The whole network is end-to-end trainable. Our analysis shows that the proposed network has a large receptive field even with a small model size. Quantitative and qualitative evaluations on public datasets demonstrate that the proposed method performs favorably against state-of-the-art algorithms in terms of accuracy, speed, and model size.
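To illustrate the core idea in the abstract, the following is a minimal sketch, assuming PyTorch, of a spatially variant recurrence over feature maps whose per-pixel weights are predicted by a small CNN. It is not the authors' implementation: the module and function names (WeightCNN, horizontal_scan, feat_cnn) and the single left-to-right scan direction are illustrative assumptions; the paper's network uses multiple CNNs, several scan directions, and a reconstruction CNN that are omitted here.

# Hedged sketch (not the authors' code): a CNN predicts a per-pixel gate that
# modulates a recurrent scan over feature maps, so the recurrence acts as a
# spatially variant filter. All names below are illustrative assumptions.
import torch
import torch.nn as nn


class WeightCNN(nn.Module):
    """Predicts a per-pixel, per-channel gate in (0, 1) for the recurrence."""
    def __init__(self, in_ch, feat_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, feat_ch, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)


def horizontal_scan(feat, gate):
    """Left-to-right spatially variant recurrence:
    h[..., x] = gate[..., x] * h[..., x-1] + (1 - gate[..., x]) * feat[..., x]
    """
    h = feat[..., 0]
    outs = [h]
    for x in range(1, feat.shape[-1]):
        g = gate[..., x]
        h = g * h + (1.0 - g) * feat[..., x]
        outs.append(h)
    return torch.stack(outs, dim=-1)


if __name__ == "__main__":
    blurry = torch.rand(1, 3, 64, 64)          # toy blurry input
    feat_cnn = nn.Conv2d(3, 16, 3, padding=1)  # stand-in feature extractor
    weight_cnn = WeightCNN(3, 16)              # learns the RNN weights per pixel
    feat = feat_cnn(blurry)
    gate = weight_cnn(blurry)
    out = horizontal_scan(feat, gate)          # one of the scan directions
    print(out.shape)                           # torch.Size([1, 16, 64, 64])

Because the gate varies with image position, each pixel is effectively filtered with its own recurrence coefficients, which is how a spatially variant kernel can be modeled without storing an explicit kernel per pixel.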

Citation Format(s)

Dynamic Scene Deblurring Using Spatially Variant Recurrent Neural Networks. / Zhang, Jiawei; Pan, Jinshan; Ren, Jimmy; Song, Yibing; Bao, Linchao; Lau, Rynson W.H.; Yang, Ming-Hsuan.

2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2018: Proceedings. IEEE Computer Society, 2018, pp. 2521-2529, Article 8578365 (Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition).

Research output: Refereed conference paper (with ISBN/ISSN), peer-reviewed