Distributed Kernel Ridge Regression with Communications

Shao-Bo Lin, Di Wang*, Ding-Xuan Zhou

*Corresponding author for this work

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

35 Citations (Scopus)
57 Downloads (CityUHK Scholars)

Abstract

This paper focuses on generalization performance analysis for distributed algorithms in the framework of learning theory. Taking distributed kernel ridge regression (DKRR) as an example, we derive its optimal learning rates in expectation and provide theoretically optimal ranges for the number of local processors. To bridge the gap between theory and experiments, we also deduce optimal learning rates for DKRR in probability, which more faithfully reflect the generalization performance and limitations of DKRR. Furthermore, we propose a communication strategy to improve the learning performance of DKRR and demonstrate the power of communications in DKRR via both theoretical assessments and numerical experiments.
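To make the setting concrete, the basic (communication-free) DKRR estimator partitions the sample across local processors, solves a kernel ridge regression on each subset, and averages the local predictors. Below is a minimal numpy sketch of this divide-and-conquer scheme; the Gaussian kernel, the bandwidth `gamma`, and the regularization convention (λ scaled by the local sample size) are illustrative assumptions, not the paper's exact experimental setup.

```python
import numpy as np

def gaussian_kernel(A, B, gamma=1.0):
    # Pairwise Gaussian (RBF) kernel matrix between rows of A and B (an
    # illustrative kernel choice; any Mercer kernel would do).
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * sq)

def dkrr_fit(X, y, m, lam, gamma=1.0):
    # Split the sample into m local subsets and solve KRR on each:
    # alpha_j = (K_j + lam * n_j * I)^{-1} y_j  (regularization scaled by
    # the local sample size n_j -- an assumed convention).
    models = []
    for Xj, yj in zip(np.array_split(X, m), np.array_split(y, m)):
        Kj = gaussian_kernel(Xj, Xj, gamma)
        alpha = np.linalg.solve(Kj + lam * len(yj) * np.eye(len(yj)), yj)
        models.append((Xj, alpha))
    return models

def dkrr_predict(models, Xtest, gamma=1.0):
    # Global estimator: plain average of the m local KRR predictors.
    preds = [gaussian_kernel(Xtest, Xj, gamma) @ a for Xj, a in models]
    return np.mean(preds, axis=0)
```

A small usage check on a noiseless regression target: the averaged estimator recovers the function well when each local sample is large enough, which is exactly the regime the paper's optimal ranges for the number of processors characterize.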
Original language: English
Article number: 93
Number of pages: 38
Journal: Journal of Machine Learning Research
Volume: 21
Online published: Apr 2020
Publication status: Published - 2020

Research Keywords

  • Communication
  • Distributed learning
  • Kernel ridge regression
  • Learning theory

Publisher's Copyright Statement

  • This full text is made available under CC-BY 4.0. https://creativecommons.org/licenses/by/4.0/
