Universal Consistency of Deep Convolutional Neural Networks

Research output: Journal Publications and Reviews (RGC: 21, 22, 62) › Publication in refereed journal › Peer-reviewed

6 Scopus Citations

Author(s)

  • Shao-Bo Lin
  • Kaidong Wang
  • Yao Wang
  • Ding-Xuan Zhou

Detail(s)

Original language: English
Pages (from-to): 4610-4617
Journal / Publication: IEEE Transactions on Information Theory
Volume: 68
Issue number: 7
Online published: 16 Feb 2022
Publication status: Published - Jul 2022

Abstract

Compared with the intense practical research activity around deep convolutional neural networks (DCNNs), the study of their theoretical behavior lags far behind. In particular, the universal consistency of DCNNs remains open. In this paper, we prove that implementing empirical risk minimization on DCNNs with expansive convolution (with zero-padding) is strongly universally consistent. Motivated by this universal consistency result, we conduct a series of experiments showing that, without any fully connected layers, DCNNs with expansive convolution perform no worse than the widely used deep neural networks with a hybrid structure that combines contracting (without zero-padding) convolutional layers and several fully connected layers.
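To make the architectural contrast in the abstract concrete, the sketch below compares the output-length arithmetic of the two convolution types on a 1-D input: expansive (zero-padded) convolution grows each layer's output by the filter length minus one, while contracting (unpadded) convolution shrinks it by the same amount. This is only a minimal illustration in PyTorch, not the paper's construction; the filter length (3), depth (4), and single-channel widths are hypothetical choices.

```python
# Minimal sketch (assumed setup, not the paper's exact network):
# contrast expansive vs. contracting 1-D convolution output lengths.
import torch
import torch.nn as nn

s = 3       # filter length (hypothetical choice)
depth = 4   # number of convolutional layers (hypothetical choice)

# Expansive convolution: zero-padding of s - 1 on each side makes every
# layer's output LONGER (length L -> L + s - 1 per layer); no fully
# connected layers are appended.
expansive = nn.Sequential(
    *[nn.Sequential(nn.Conv1d(1, 1, kernel_size=s, padding=s - 1), nn.ReLU())
      for _ in range(depth)]
)

# Contracting convolution: no padding, so every layer's output SHRINKS
# (length L -> L - s + 1 per layer); hybrid designs typically append
# several fully connected layers after this.
contracting = nn.Sequential(
    *[nn.Sequential(nn.Conv1d(1, 1, kernel_size=s), nn.ReLU())
      for _ in range(depth)]
)

x = torch.randn(1, 1, 16)    # batch of one 16-dimensional input
print(expansive(x).shape)    # torch.Size([1, 1, 24]): 16 + 4*(3-1)
print(contracting(x).shape)  # torch.Size([1, 1, 8]):  16 - 4*(3-1)
```

Because each expansive layer enlarges rather than compresses the representation, the purely convolutional network keeps enough capacity that, according to the paper's experiments, it performs no worse even without any fully connected layers.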

Research Area(s)

  • Convolution, Convolutional neural networks, Deep learning, Feature extraction, Risk management, Sparse matrices, Universal consistency, Urban areas

Citation Format(s)

Universal Consistency of Deep Convolutional Neural Networks. / Lin, Shao-Bo; Wang, Kaidong; Wang, Yao; Zhou, Ding-Xuan.
In: IEEE Transactions on Information Theory, Vol. 68, No. 7, 07.2022, p. 4610-4617.
