Distributed regularized least squares with flexible Gaussian kernels

Research output: Publication in refereed journal (peer-reviewed)


Detail(s)

Original language: English
Pages (from-to): 349-377
Journal / Publication: Applied and Computational Harmonic Analysis
Volume: 53
Online published: 29 Mar 2021
Publication status: Published - Jul 2021

Abstract

We propose a distributed learning algorithm for least squares regression in reproducing kernel Hilbert spaces (RKHSs) generated by flexible Gaussian kernels, based on a divide-and-conquer strategy. Our study demonstrates that Gaussian kernels with flexible variances greatly improve the learning performance of distributed algorithms over those generated by a fixed Gaussian kernel. Under mild conditions, we establish sharp error bounds for the distributed algorithm with labeled data, in which the variance of the Gaussian kernel serves as a tuning parameter. We show that, with suitably chosen parameters, our error rates are almost minimax optimal under the standard Sobolev smoothness condition on the target function. By utilizing additional information from unlabeled data for semi-supervised learning, we relax the restrictions on the number of data partitions and on the range of the Sobolev smoothness index.
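As a rough illustration of the divide-and-conquer strategy the abstract describes (not the paper's implementation), the sketch below partitions the labeled data into m subsets, solves a regularized least squares problem in the Gaussian RKHS on each subset, and averages the local predictors. The kernel variance sigma plays the role of the tuning parameter mentioned in the abstract; all function names, the regularization parameter lam, and the direct linear solver are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(X, Z, sigma):
    """Gaussian kernel matrix K[i, j] = exp(-||x_i - z_j||^2 / sigma^2)."""
    sq_dists = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / sigma ** 2)

def distributed_krr(X, y, m, sigma, lam, seed=0):
    """Divide-and-conquer kernel ridge regression (illustrative sketch):
    split the data into m parts, fit a local regularized least squares
    estimator on each part, and uniformly average the local predictors."""
    rng = np.random.default_rng(seed)
    parts = np.array_split(rng.permutation(len(X)), m)
    local_models = []
    for idx in parts:
        Xj, yj = X[idx], y[idx]
        n_j = len(idx)
        K = gaussian_kernel(Xj, Xj, sigma)
        # Local estimator: solve (K + lam * n_j * I) alpha = y_j.
        alpha = np.linalg.solve(K + lam * n_j * np.eye(n_j), yj)
        local_models.append((Xj, alpha))

    def predict(X_test):
        # Average the m local predictions at the test points.
        preds = [gaussian_kernel(X_test, Xj, sigma) @ alpha
                 for Xj, alpha in local_models]
        return np.mean(preds, axis=0)

    return predict

# Usage example on synthetic data (hypothetical setup):
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(2000, 1))
y = np.sin(np.pi * X[:, 0]) + 0.1 * rng.standard_normal(2000)
f_bar = distributed_krr(X, y, m=10, sigma=0.5, lam=1e-3)
print(f_bar(np.array([[0.25]])))
```

In this scheme each local problem costs only O((n/m)^3) to solve, which is the computational motivation for divide-and-conquer; the paper's contribution concerns how tuning sigma (rather than fixing it) affects the error rates of the averaged estimator.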

Research Area(s)

  • Distributed learning, Flexible Gaussian kernels, Reproducing kernel Hilbert space, Semi-supervised learning, Sobolev space