Approximation Theory of Incremental PCA and Some Kernel-Based Regularization Schemes for Learning
Project: Research
Researcher(s)
- Dingxuan ZHOU (Principal Investigator / Project Coordinator), Department of Data Science
Description
Learning theory provides the mathematical foundations of machine learning, studying efficient learning algorithms for analyzing and processing large data sets in science and technology. Methods and ideas from approximation theory play an important role in learning theory. The goal of this project is to develop an approximation theory for incremental principal component analysis and for some kernel-based regularization schemes for learning. We shall first carry out error analysis and derive learning rates for classical incremental principal component analysis. We shall then obtain confidence-based error bounds for online classification algorithms associated with convex loss functions and prove their almost sure convergence. Approximation analysis for SVM-type regularization schemes with additive kernels in additive models will be conducted by means of approximation by integral operators. An interesting approximation theory question concerning Gaussian kernels that arises from this study will also be addressed. Coefficient-based schemes and the robustness of minimum error entropy regularization will be investigated via the scaling operator from wavelet analysis and the concept of influence functions. Finally, some data analysis problems from bioinformatics will be considered using ideas from learning theory and approximation theory.
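For orientation, kernel-based regularization schemes of the SVM type commonly take the following standard form; this is a textbook formulation stated for context, not quoted from the project:

$$
f_{z,\lambda} = \arg\min_{f \in \mathcal{H}_K} \frac{1}{m} \sum_{i=1}^{m} \phi\bigl(y_i f(x_i)\bigr) + \lambda \|f\|_K^2,
$$

where $\mathcal{H}_K$ is the reproducing kernel Hilbert space of a Mercer kernel $K$, $\phi$ is a convex loss function (the hinge loss $\phi(t) = \max\{0, 1-t\}$ recovers the SVM), and $\lambda > 0$ is a regularization parameter.

The classical incremental PCA iteration is often identified with Oja's rule, which updates an eigenvector estimate one sample at a time. The sketch below is a minimal illustration under assumed choices, not the project's analysis: the function name `oja_top_component`, the step-size schedule `eta0 / t`, and the synthetic Gaussian data are all illustrative assumptions.

```python
import numpy as np

def oja_top_component(stream, dim, eta0=0.5, seed=0):
    """Estimate the top principal component of a data stream with
    Oja's rule, a classical incremental PCA iteration (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(dim)
    w /= np.linalg.norm(w)                # random unit-norm initialization
    for t, x in enumerate(stream, start=1):
        eta = eta0 / t                    # decaying step size (assumed schedule)
        y = w @ x                         # projection onto the current estimate
        w += eta * y * (x - y * w)        # Oja update: Hebbian step minus decay
        w /= np.linalg.norm(w)            # renormalize for numerical stability
    return w

# Usage: recover the leading eigenvector of a synthetic covariance.
rng = np.random.default_rng(1)
true_dir = np.array([3.0, 1.0, 0.5])
true_dir /= np.linalg.norm(true_dir)
samples = (rng.standard_normal((5000, 3))
           + 2.0 * rng.standard_normal((5000, 1)) * true_dir)
w_hat = oja_top_component(iter(samples), dim=3)
print(abs(w_hat @ true_dir))  # approaches 1 as the estimate aligns with true_dir
```

The decaying step size mirrors the conditions under which almost sure convergence results for online iterations of this kind are typically established.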
Detail(s)

| Project number | 9042091 |
| --- | --- |
| Grant type | GRF |
| Status | Finished |
| Effective start/end date | 1/01/15 → 12/12/18 |
Keyword(s)
- approximation theory, wavelet analysis, learning theory, reproducing kernel spaces