Distributed kernel gradient descent algorithm for minimum error entropy principle

Research output: Journal Publications and Reviews (RGC: 21, 22, 62) › Publication in refereed journal › peer-review

12 Scopus Citations

Author(s)

Detail(s)

Original language: English
Pages (from-to): 229-256
Journal / Publication: Applied and Computational Harmonic Analysis
Volume: 49
Issue number: 1
Online published: 15 Jan 2019
Publication status: Published - Jul 2020

Abstract

Distributed learning based on the divide-and-conquer approach is a powerful tool for big data processing. We introduce a distributed kernel gradient descent algorithm for the minimum error entropy principle and analyze its convergence. We show that, under mild conditions, the L2 error decays at a minimax optimal rate. As a tool, we establish concentration inequalities for U-statistics, which play a pivotal role in our error analysis.
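To make the setting concrete, below is a minimal sketch of divide-and-conquer kernel gradient descent under the minimum error entropy (MEE) principle. It is an illustration under stated assumptions, not the paper's exact algorithm: the Gaussian RKHS kernel, the Gaussian windowing function of bandwidth h, the step size and iteration count, the plain unweighted average of local estimators, and the toy data are all assumptions of the sketch. Each local machine ascends the empirical information potential V(f) = (1/m^2) sum_{i,j} exp(-(e_i - e_j)^2 / (2h^2)) over residuals e_i = y_i - f(x_i); the pairwise double sum is the U-statistic structure referred to in the abstract. Since V is invariant to constant shifts of f, an intercept is recovered from the residual mean.

```python
import numpy as np

def gaussian_kernel(X1, X2, sigma=1.0):
    """RKHS kernel K(x, x') = exp(-|x - x'|^2 / (2 sigma^2)) -- an assumed choice."""
    d2 = np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :] - 2 * X1 @ X2.T
    return np.exp(-d2 / (2 * sigma**2))

def mee_kernel_gd(X, y, h=1.0, eta=0.1, T=200, sigma=1.0):
    """One local machine: gradient ascent on the empirical information
    potential V(f) = (1/m^2) sum_{i,j} exp(-(e_i - e_j)^2 / (2h^2)),
    with e_i = y_i - f(x_i) and f(x) = sum_k alpha_k K(x_k, x)."""
    m = len(y)
    K = gaussian_kernel(X, X, sigma)
    alpha = np.zeros(m)
    for _ in range(T):
        e = y - K @ alpha                    # residuals e_i
        D = e[:, None] - e[None, :]          # pairwise differences e_i - e_j
        W = np.exp(-D**2 / (2 * h**2))       # Gaussian window weights
        # functional gradient of V, collected onto each K(x_i, .)
        grad = 2.0 / (m**2 * h**2) * np.sum(D * W, axis=1)
        alpha += eta * grad                  # gradient *ascent* on V
    # V is shift-invariant, so recover the intercept from the residual mean
    b = np.mean(y - K @ alpha)
    return alpha, b

def distributed_mee(X, y, n_machines=4, **kw):
    """Divide-and-conquer: fit on disjoint subsets, then average the
    local predictors (a simple unweighted average, assumed here)."""
    parts = np.array_split(np.arange(len(y)), n_machines)
    models = [(X[p],) + mee_kernel_gd(X[p], y[p], **kw) for p in parts]
    def predict(Xt):
        preds = [gaussian_kernel(Xt, Xp, kw.get("sigma", 1.0)) @ a + b
                 for Xp, a, b in models]
        return np.mean(preds, axis=0)
    return predict

# toy usage on synthetic regression data
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(400, 1))
y = np.sin(np.pi * X[:, 0]) + 0.1 * rng.standard_normal(400)
f_bar = distributed_mee(X, y, n_machines=4, h=1.0, eta=0.5, T=300, sigma=0.3)
Xt = np.linspace(-1, 1, 5)[:, None]
print(np.c_[np.sin(np.pi * Xt[:, 0]), f_bar(Xt)])  # target vs. estimate
```

The point of the scheme is that each machine touches only m/N samples, and the averaging step recovers (under the paper's conditions) the minimax optimal L2 rate of a single machine trained on all the data.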

Research Area(s)

  • Distributed learning, Gradient descent algorithm, Kernel method, Minimum error entropy