Optimal learning rates for distribution regression
Research output: Journal Publications and Reviews (RGC: 21, 22, 62) › 21_Publication in refereed journal › peer-review
Author(s)
Fang, Zhiying; Guo, Zheng-Chu; Zhou, Ding-Xuan
Detail(s)
Original language | English |
---|---|
Article number | 101426 |
Journal / Publication | Journal of Complexity |
Volume | 56 |
Online published | 20 Aug 2019 |
Publication status | Published - Feb 2020 |
Abstract
We study a learning algorithm for distribution regression with regularized least squares. The algorithm involves two stages of sampling and aims to regress from probability distributions to real-valued outputs: the first-stage sample consists of distributions, and the second-stage sample is drawn from these distributions. To extract information from the samples, we embed the distributions into a reproducing kernel Hilbert space (RKHS) and use the second-stage sample to form the regressor via mean embeddings. We establish error bounds in the L2-norm and prove that the regressor is a good approximation to the regression function. We derive a learning rate that is optimal in the setting of standard least squares regression, improving on existing results. Our analysis is based on a novel second-order decomposition to bound operator norms.
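To make the two-stage scheme concrete, here is a minimal NumPy sketch of distribution regression via kernel mean embeddings and regularized least squares. It assumes a Gaussian first-stage kernel and a linear second-stage kernel on the embeddings; the parameter names (`gamma`, `lam`) and kernel choices are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def gaussian_kernel(A, B, gamma=1.0):
    # Pairwise Gaussian kernel K(x, x') = exp(-gamma * ||x - x'||^2)
    # between the rows of A (n, d) and B (m, d).
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def mean_embedding_gram(samples, gamma=1.0):
    # samples: list of arrays, one (N_i, d) array per distribution
    # (the second-stage sample). With a linear second-stage kernel,
    # <mu_hat_i, mu_hat_j> is the average of K over all sample pairs.
    n = len(samples)
    G = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):
            G[i, j] = G[j, i] = gaussian_kernel(samples[i], samples[j], gamma).mean()
    return G

def fit(samples, y, lam=1e-2, gamma=1.0):
    # Regularized least squares over the empirical mean embeddings:
    # solve (G + lam * n * I) alpha = y.
    G = mean_embedding_gram(samples, gamma)
    n = len(samples)
    return np.linalg.solve(G + lam * n * np.eye(n), y)

def predict(samples_train, alpha, samples_new, gamma=1.0):
    # f(mu_new) = sum_i alpha_i <mu_hat_i, mu_hat_new>.
    k = np.array([[gaussian_kernel(s_tr, s_new, gamma).mean()
                   for s_tr in samples_train] for s_new in samples_new])
    return k @ alpha

# Toy usage: each first-stage "observation" is a Gaussian with a random
# mean; the target is the first coordinate of that mean.
rng = np.random.default_rng(0)
means = rng.uniform(-1, 1, size=(50, 2))
samples = [m + 0.1 * rng.standard_normal((30, 2)) for m in means]
y = means[:, 0]
alpha = fit(samples, y, lam=1e-3, gamma=2.0)
```

The size of the second-stage sample (30 points per distribution above) controls how well the empirical mean embeddings approximate the true ones, which is exactly the quantity the paper's error analysis must account for on top of the standard least squares error.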
Research Area(s)
- Distribution regression, Integral operator, Mean embedding, Optimal learning rate, Reproducing kernel Hilbert space
Citation Format(s)
Optimal learning rates for distribution regression. / Fang, Zhiying; Guo, Zheng-Chu; Zhou, Ding-Xuan.
In: Journal of Complexity, Vol. 56, 101426, 02.2020.