Approximation Theory of Structured Deep Nets and Learning

Project: ResearchGRF


Description

Deep learning has proved highly successful as an effective practical scheme for handling big data. It is implemented by deep neural networks, or deep nets, which often have special structures rather than being fully connected. However, there is little work on the approximation properties of structured deep nets comparable to the classical approximation theory for fully connected neural networks. This project aims at a rigorous approximation theory for several topics in structured deep nets and deep learning.

First, we plan to establish an approximation theory for convolutional deep nets associated with the rectified linear unit (ReLU). Expected results include estimates, in terms of the convolution kernels, of the complexity of the function space spanned by the components at the last level of hidden units; rates of function approximation by the generated output functions; and an analysis of the role of pooling in convolutional deep nets and deep learning.

Then we plan to carry out a wavelet analysis of learning with deep nets: estimating covering numbers of the involved hypothesis spaces to handle the redundancy of distributed representations of deep nets, and conducting error analysis for learning with deep nets of special structures.

Finally, stochastic composite mirror descent algorithms, including online mirror descent with differentiable mirror maps and composite mirror descent with non-differentiable mirror maps, will be studied by analyzing the induced Bregman distances; this will aid the theoretical understanding of recurrent deep nets in deep learning.
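To make the first theme concrete, below is a minimal sketch, in Python with NumPy, of the kind of structured network in question: a deep net built from 1-D convolutional layers with ReLU activation instead of fully connected layers, with a linear output formed from the last level of hidden units. The input dimension, filter size, depth, and zero-padded ("full") convolution are hypothetical illustrative choices, not a specification of the architectures the project will analyze.

import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def conv_relu_layer(h, w, b):
    # Zero-padded 1-D convolution with filter w and bias vector b,
    # followed by the rectified linear unit; the output length grows
    # by len(w) - 1 at every layer.
    return relu(np.convolve(h, w, mode="full") - b)

def structured_deep_net(x, kernels, biases, c):
    # x: input vector; kernels, biases: one pair per hidden layer;
    # c: coefficients of the linear output built from the components
    # at the last level of hidden units.
    h = x
    for w, b in zip(kernels, biases):
        h = conv_relu_layer(h, w, b)
    return float(np.dot(c, h))

# Illustrative usage with hypothetical sizes: input dimension 8,
# filter size 3, four convolutional layers.
rng = np.random.default_rng(0)
x = rng.standard_normal(8)
kernels = [rng.standard_normal(3) for _ in range(4)]
dims = [8 + 2 * (j + 1) for j in range(4)]   # hidden-layer widths 10, 12, 14, 16
biases = [rng.standard_normal(d) for d in dims]
c = rng.standard_normal(dims[-1])
print(structured_deep_net(x, kernels, biases, c))

The linear span of the last hidden layer, viewed through its dependence on the convolution kernels, is the function space whose complexity and approximation rates the project proposes to quantify.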
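For the final theme, the standard notions can be fixed as follows; this is a textbook formulation of the Bregman distance and the composite mirror descent update, given only to clarify terminology, not the project's own algorithmic variants. For a differentiable, strictly convex mirror map $\Psi$, step sizes $\eta_t > 0$, a stochastic (sub)gradient $g_t$ of the loss at the iterate $w_t$, and a convex regularizer $r$,

\[
D_\Psi(u, v) = \Psi(u) - \Psi(v) - \langle \nabla\Psi(v),\, u - v \rangle,
\]
\[
w_{t+1} = \arg\min_{w} \Big\{ \langle g_t, w \rangle + r(w) + \tfrac{1}{\eta_t}\, D_\Psi(w, w_t) \Big\}.
\]

Online mirror descent corresponds to the case $r \equiv 0$; when the mirror map is not differentiable, the Bregman distance itself must be suitably generalized, which is part of what the proposed analysis of the induced Bregman distances addresses.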

Detail(s)

Status: Active
Effective start/end date: 1/01/18 → …