Distributed learning for sketched kernel regression

Heng Lian*, Jiamin Liu, Zengyan Fan

*Corresponding author for this work

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

10 Citations (Scopus)

Abstract

We study distributed learning for regularized least squares regression in a reproducing kernel Hilbert space (RKHS). The divide-and-conquer strategy is a frequently used approach for dealing with very large data sets: it computes an estimator on each subset and then averages these local estimators. The existing theoretical constraint on the number of subsets implies that each subset can still be large, so random sketching can be used to produce the local estimator on each subset, further reducing the computation compared to vanilla divide-and-conquer. In this setting, sketching and divide-and-conquer are complementary to each other in dealing with the large sample size. We show that the optimal learning rates can be retained. Simulations are performed to compare the sketched and standard (non-sketched) divide-and-conquer methods.
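
To make the combination concrete, the following is a minimal NumPy sketch of one plausible instantiation: each subset fits a sketched kernel ridge regression (here a Gaussian sketch applied to the coefficient space, in the style of sketched KRR with the objective min over theta of (1/n)||y - K S'theta||^2 + lam * theta' S K S' theta), and the divide-and-conquer predictor averages the local fits. All names, the Gaussian kernel, the Gaussian sketch, and the toy parameters are illustrative assumptions, not the paper's exact algorithm or tuning.

import numpy as np

def rbf_kernel(X, Z, sigma=1.0):
    # Gaussian (RBF) kernel matrix between rows of X and rows of Z (assumed kernel choice).
    sq = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq / (2.0 * sigma ** 2))

def sketched_krr(X, y, lam, s, sigma=1.0, seed=0):
    # Sketched KRR on one subset: solve the s x s system
    #   (S K K S' + n*lam * S K S') theta = S K y
    # with an s x n Gaussian sketch S, and return alpha = S' theta,
    # so the fitted function is f(x) = sum_i alpha_i k(x_i, x).
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    K = rbf_kernel(X, X, sigma)
    S = rng.standard_normal((s, n)) / np.sqrt(s)  # Gaussian sketch (assumption)
    KSt = K @ S.T                                 # n x s
    A = KSt.T @ KSt + n * lam * (S @ KSt)         # s x s: S K K S' + n*lam*S K S'
    theta = np.linalg.solve(A, KSt.T @ y)
    return S.T @ theta                            # alpha in R^n

def dc_sketched_krr(X, y, n_subsets, lam, s, sigma=1.0, seed=0):
    # Divide-and-conquer: fit a sketched local estimator per subset,
    # then return the averaged predictor.
    rng = np.random.default_rng(seed)
    blocks = np.array_split(rng.permutation(len(y)), n_subsets)
    fits = [(X[b], sketched_krr(X[b], y[b], lam, s, sigma, seed + 1 + j))
            for j, b in enumerate(blocks)]

    def predict(X_new):
        # Average of the local sketched predictors.
        return np.mean([rbf_kernel(X_new, Xb, sigma) @ ab for Xb, ab in fits],
                       axis=0)

    return predict

# Toy usage (hypothetical data): f(x) = sin(2*pi*x) plus noise.
rng = np.random.default_rng(1)
X = rng.uniform(size=(2000, 1))
y = np.sin(2 * np.pi * X[:, 0]) + 0.1 * rng.standard_normal(2000)
f_hat = dc_sketched_krr(X, y, n_subsets=10, lam=1e-3, s=30, sigma=0.2)
print(f_hat(np.array([[0.25], [0.75]])))  # roughly [1, -1]

Note that each local solve here costs only an s x s linear system rather than an n_j x n_j one, which is the sense in which sketching and divide-and-conquer are complementary: divide-and-conquer shrinks n_j, and sketching shrinks the per-subset solve further.
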
Original language: English
Pages (from-to): 368-376
Journal: Neural Networks
Volume: 143
Online published: 25 Jun 2021
DOIs
Publication status: Published - Nov 2021

Research Keywords

  • Distributed learning
  • Kernel method
  • Optimal rate
  • Randomized sketches
