Universal Consistency of Deep Convolutional Neural Networks
Research output: Journal Publications and Reviews (RGC: 21, 22, 62) › 21_Publication in refereed journal › peer-review
Author(s)
Lin, Shao-Bo; Wang, Kaidong; Wang, Yao et al.
Detail(s)
| Field | Value |
|---|---|
| Original language | English |
| Pages (from-to) | 4610-4617 |
| Journal / Publication | IEEE Transactions on Information Theory |
| Volume | 68 |
| Issue number | 7 |
| Online published | 16 Feb 2022 |
| Publication status | Published - Jul 2022 |
Abstract
Compared with the intensive practical research on deep convolutional neural networks (DCNNs), the study of their theoretical behavior lags far behind. In particular, the universal consistency of DCNNs remains open. In this paper, we prove that implementing empirical risk minimization on DCNNs with expansive convolution (with zero-padding) is strongly universally consistent. Motivated by this consistency result, we conduct a series of experiments showing that, without any fully connected layers, DCNNs with expansive convolution perform no worse than the widely used deep neural networks with a hybrid structure combining contracting (without zero-padding) convolutional layers and several fully connected layers.
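For context on the two convolution types named in the abstract, the following minimal NumPy sketch (not from the paper; the toy signal, filter, and sizes are illustrative assumptions) shows how zero-padded ("full") convolution expands the dimension while unpadded ("valid") convolution contracts it:

```python
import numpy as np

# Toy one-dimensional input and filter (illustrative values, not from the paper).
x = np.arange(8, dtype=float)      # input vector of length d = 8
w = np.array([1.0, -2.0, 1.0])     # filter of length s = 3

# Expansive convolution (with zero-padding): output length d + s - 1 = 10,
# so each layer enlarges the dimension, as in the DCNNs analyzed in the paper.
y_expansive = np.convolve(x, w, mode="full")

# Contracting convolution (without zero-padding): output length d - s + 1 = 6,
# so each layer shrinks the dimension, as in the hybrid comparison networks.
y_contracting = np.convolve(x, w, mode="valid")

print(y_expansive.shape, y_contracting.shape)  # (10,) (6,)
```

Stacking the expansive variant is what lets the network widths grow layer by layer without any fully connected layers, which is the architecture whose strong universal consistency the paper establishes.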
Research Area(s)
- Convolution, Convolutional neural networks, Deep learning, Feature extraction, Risk management, Sparse matrices, Universal consistency, Urban areas
Citation Format(s)
Universal Consistency of Deep Convolutional Neural Networks. / Lin, Shao-Bo; Wang, Kaidong; Wang, Yao et al.
In: IEEE Transactions on Information Theory, Vol. 68, No. 7, 07.2022, p. 4610-4617.