Generalization and Expressivity for Deep Nets

Shao-Bo Lin*

*Corresponding author for this work

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

Abstract

Along with the rapid development of deep learning in practice, theoretical explanations for its success have become urgent. Generalization and expressivity are two widely used measurements to quantify the theoretical behavior of deep nets. Expressivity focuses on finding functions that are expressible by deep nets but cannot be approximated by shallow nets with a similar number of neurons; it usually implies large capacity. Generalization aims at deriving fast learning rates for deep nets; it usually requires small capacity to reduce the variance. Unlike previous studies on deep nets, which pursue either expressivity or generalization, we consider both factors to explore the theoretical advantages of deep nets. For this purpose, we construct a deep net with two hidden layers possessing excellent expressivity in terms of localized and sparse approximation. Then, using the well-known covering number to measure capacity, we find that deep nets possess excellent expressive power (measured by localized and sparse approximation) without essentially enlarging the capacity of shallow nets. As a consequence, we derive near-optimal learning rates for implementing empirical risk minimization on deep nets. These results theoretically exhibit the advantages of deep nets from the learning theory viewpoint.
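The localized approximation the abstract attributes to two-hidden-layer nets can be illustrated with a standard sigmoid "bump" construction: the first hidden layer gates each coordinate into an interval, and the second layer fires only when all gates are active, yielding a function supported (approximately) on a small cube. This is a minimal sketch of that general idea, not the paper's exact construction; the function names and the gain parameter `K` are illustrative assumptions.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def deep_bump(x, a, b, K=50.0):
    """Two-hidden-layer sigmoid net approximating the indicator of the
    cube [a_1, b_1] x ... x [a_d, b_d] -- a localized unit.

    Layer 1: sigmoid(K(x_i - a_i)) - sigmoid(K(x_i - b_i)) is close to 1
             when x_i lies in [a_i, b_i] and close to 0 far outside it.
    Layer 2: one sigmoid thresholds the sum of the d gates, so the net
             outputs ~1 only when every coordinate gate is active.
    """
    x, a, b = np.asarray(x, float), np.asarray(a, float), np.asarray(b, float)
    d = len(a)
    gates = sigmoid(K * (x - a)) - sigmoid(K * (x - b))   # first hidden layer
    return sigmoid(K * (gates.sum() - d + 0.5))           # second hidden layer
```

A single hidden layer of sigmoids cannot produce such compactly localized responses, which is the sense in which depth buys expressivity here; sparse approximation then combines a few such bumps to fit a target function on the cubes where it varies.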
Original language: English
Pages (from-to): 1392-1406
Journal: IEEE Transactions on Neural Networks and Learning Systems
Volume: 30
Issue number: 5
Online published: 27 Sept 2018
Publication status: Published - May 2019

Research Keywords

  • Biological neural networks
  • Deep learning
  • expressivity
  • generalization
  • Learning systems
  • learning theory
  • localized approximation
  • Machine learning
  • Neurons
  • Power measurement
  • Risk management
  • Sparse representation
