On Optimal Learning With Random Features

Jiamin Liu, Heng Lian*

*Corresponding author for this work

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

4 Citations (Scopus)

Abstract

We consider supervised learning in a reproducing kernel Hilbert space (RKHS) using random features. We show that the optimal rate is achieved under suitable regularity conditions, while at the same time improving on the existing bounds on the number of random features required. As a straightforward extension, we also consider distributed learning in the simple setting of one-shot communication and show that it achieves the same optimal rate.
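The approach the abstract describes, regression in an RKHS approximated by random features, can be sketched as random Fourier features followed by ridge regression in the feature space. The following is a minimal illustration only, not the authors' method or experimental setup; the data, kernel bandwidth, number of features M, and regularization parameter are all assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D regression data (illustrative only)
n = 200
X = rng.uniform(-3, 3, size=(n, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n)

# M random Fourier features approximating the Gaussian kernel
# k(x, x') = exp(-||x - x'||^2 / 2):
#   phi(x) = sqrt(2/M) * cos(W^T x + b)
M = 100
W = rng.standard_normal((1, M))           # frequencies drawn from N(0, I)
b = rng.uniform(0, 2 * np.pi, size=M)     # phases drawn from Uniform[0, 2*pi)
Phi = np.sqrt(2.0 / M) * np.cos(X @ W + b)  # (n, M) feature matrix

# Ridge regression in the random-feature space:
# solve (Phi^T Phi + n*lambda*I) theta = Phi^T y
lam = 1e-3
theta = np.linalg.solve(Phi.T @ Phi + n * lam * np.eye(M), Phi.T @ y)

# Training-set predictions and mean squared error
y_hat = Phi @ theta
mse = np.mean((y - y_hat) ** 2)
print(mse)
```

In the one-shot distributed setting the abstract mentions, each machine would fit such an estimator on its local data and the final estimator would average the local ones, at the cost of a single round of communication.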
Original language: English
Pages (from-to): 9536-9541
Journal: IEEE Transactions on Neural Networks and Learning Systems
Volume: 34
Issue number: 11
Online published: 2 Mar 2022
DOIs
Publication status: Published - Nov 2023

Funding

The work of Heng Lian was supported in part by the NSFC and the Shenzhen Research Institute, City University of Hong Kong, under Project 11871411; and in part by the Hong Kong Research Grants Council (RGC) General Research Fund under Grant 11301718, Grant 11300519, Grant 11300721, and Grant 11311822.

Research Keywords

  • Convergence
  • Distributed learning
  • Hilbert space
  • Kernel
  • kernel method
  • optimal rate
  • random features
  • Standards
  • Supervised learning
  • Time complexity
  • Urban areas

