Theory of Deep Learning: from CNNs to RNNs

Project: Research

Description

Deep learning has been widely applied to processing big data in many practical domains. Its success rests on various families of deep neural networks whose architectures and structures are adapted to different applications. Compared with this practical power and with the extensive work on computational issues, deep learning still lacks a solid mathematical foundation, and the ability of structured networks to represent and approximate functions is not well understood theoretically. This project concerns the approximation theory of two important families of deep neural networks: deep convolutional neural networks (CNNs) and deep recurrent neural networks (RNNs).

Our first purpose is to further study the superiority of deep CNNs in approximating structured functions and functionals, with structural features induced by filters and by sparsity in the time-frequency domain. Our second purpose is to consolidate the theory of deep RNNs in approximating sequence-to-sequence maps, with explicit approximation rates for various activation functions. Our last purpose is to consider deep learning problems arising in practical applications, together with related theoretical questions. For the multi-class classification task arising from the readability of Chinese texts, we propose a new framework of adjacent-level accuracy and carry out error analysis together with readability assessments (a sketch of this accuracy notion is given below). For deep RNNs processing sequential data in finance, we consider a gated activation consisting of a filter part and a gate part, and plan to study the efficiency of the induced deep learning algorithms (see the second sketch below).
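
To make the adjacent-level accuracy framework concrete, the sketch below counts a prediction as correct when it falls within one readability level of the true label, so that a text graded one level off is still credited. This is a minimal sketch under that assumption; the function name adjacent_level_accuracy and the tolerance parameter are illustrative, and the project's precise definition may differ.

```python
import numpy as np

def adjacent_level_accuracy(y_true, y_pred, tolerance=1):
    """Fraction of predictions within `tolerance` levels of the true label.

    With tolerance=0 this reduces to ordinary multi-class accuracy;
    with tolerance=1 a prediction one readability level off still counts.
    """
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    return float(np.mean(np.abs(y_true - y_pred) <= tolerance))

# Example: six texts graded on five readability levels, labeled 0..4.
y_true = [0, 1, 2, 3, 4, 2]
y_pred = [0, 2, 2, 1, 4, 3]
print(adjacent_level_accuracy(y_true, y_pred, tolerance=0))  # exact accuracy: 0.5
print(adjacent_level_accuracy(y_true, y_pred, tolerance=1))  # adjacent-level: 0.833...
```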
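The gated activation for sequential financial data is described above only as having a filter part and a gate part. A minimal PyTorch sketch, assuming the common tanh-filter / sigmoid-gate combination of WaveNet-style gated units; the class and attribute names here are hypothetical, and the branches actually studied in the project may differ.

```python
import torch
import torch.nn as nn

class GatedActivation(nn.Module):
    """Gated activation: elementwise product of a filter branch and a gate branch.

    Assumed form (not confirmed by the project description):
    output = tanh(filter(x)) * sigmoid(gate(x)).
    """
    def __init__(self, in_features, out_features):
        super().__init__()
        self.filter_map = nn.Linear(in_features, out_features)  # filter part
        self.gate_map = nn.Linear(in_features, out_features)    # gate part

    def forward(self, x):
        return torch.tanh(self.filter_map(x)) * torch.sigmoid(self.gate_map(x))

# Example: apply the gated activation to a batch of 4 feature vectors.
cell = GatedActivation(in_features=8, out_features=8)
x = torch.randn(4, 8)
print(cell(x).shape)  # torch.Size([4, 8])
```

The sigmoid gate takes values in (0, 1), so it acts as a learned soft switch that scales the bounded tanh filter output; this is one standard way a "filter and gate" decomposition is realized in recurrent architectures.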

Detail(s)

Project number: 9043219
Grant type: GRF
Status: Active
Effective start/end date: 1/01/22 → …