Approximation Theory of Incremental PCA and Some Kernel-Based Regularization Schemes for Learning

Project: Research


Researcher(s)

Description

Learning theory provides the mathematical foundations of machine learning, studying efficient learning algorithms for analyzing and processing large data sets in science and technology. Methods and ideas from approximation theory play an important role in learning theory. The goal of this project is to develop approximation theory for incremental principal component analysis and for some kernel-based regularization schemes for learning. We shall first carry out error analysis and derive learning rates for classical incremental principal component analysis. We shall then obtain confidence-based error bounds for online classification algorithms associated with convex loss functions and prove their almost sure convergence. Approximation analysis for SVM-type regularization schemes with additive kernels in additive models will be conducted by means of approximation by integral operators. An interesting approximation theory question concerning Gaussian kernels arising from this study will also be considered. Coefficient-based schemes and the robustness of minimum error entropy regularization will be investigated via the scaling operator in wavelet analysis and the concept of influence functions. Finally, some data analysis problems from bioinformatics will be considered using ideas from learning theory and approximation theory.
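As a rough illustration of the incremental principal component analysis studied above, one common online formulation is Oja's rule, which updates an estimate of the leading principal component one sample at a time rather than computing a batch eigendecomposition. The sketch below (function name, learning rate, and toy data are illustrative choices, not taken from the project) assumes centered data:

```python
import numpy as np

def oja_leading_component(stream, dim, lr=0.01, seed=0):
    """Estimate the leading principal component from a data stream
    using Oja's rule, processing one sample at a time (a sketch)."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(dim)
    w /= np.linalg.norm(w)            # start from a random unit vector
    for x in stream:
        y = w @ x                     # projection onto current estimate
        w += lr * y * (x - y * w)     # Oja update step
        w /= np.linalg.norm(w)        # renormalize to unit length
    return w

# Toy centered data whose variance is largest along the first axis.
rng = np.random.default_rng(1)
data = rng.standard_normal((2000, 3)) * np.array([3.0, 1.0, 0.5])
w = oja_leading_component(data, dim=3)
print(np.abs(w))  # the first coordinate should dominate
```

The error analysis and learning rates mentioned in the description concern how fast such online estimates converge to the true principal subspace; this snippet only shows the algorithmic mechanism on synthetic data.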

Detail(s)

Project number: 9042091
Grant type: GRF
Status: Finished
Effective start/end date: 1/01/15 – 12/12/18

Research areas

  • approximation theory, wavelet analysis, learning theory, reproducing kernel spaces