Kernel gradient descent algorithm for information theoretic learning

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

4 Scopus Citations

Author(s)

  • Ting Hu
  • Qiang Wu
  • Ding-Xuan Zhou

Detail(s)

Original language: English
Article number: 105518
Journal / Publication: Journal of Approximation Theory
Volume: 263
Online published: 29 Dec 2020
Publication status: Published - Mar 2021

Abstract

Information theoretic learning is a learning paradigm that uses concepts of entropy and divergence from information theory. A variety of signal processing and machine learning methods fall into this framework, and the minimum error entropy principle is a typical example. In this paper, we study a kernel version of minimum error entropy methods that can be used to find nonlinear structures in the data. We show that kernel minimum error entropy can be implemented by kernel-based gradient descent algorithms with or without regularization, and we derive convergence rates for both algorithms.
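The abstract does not spell out the update rule, so the Python sketch below only illustrates the general kind of method it describes: gradient ascent on the empirical information potential (the usual Parzen-window surrogate for minimizing error entropy) over a Gaussian RKHS, with an optional regularization term. The objective, function names, and parameters (sigma, bandwidth, lam, step) are assumptions for illustration, not the authors' exact algorithm.

import numpy as np

def gaussian_gram(X, bandwidth):
    # Gram matrix K[i, j] = exp(-||x_i - x_j||^2 / (2 * bandwidth^2))
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-d2 / (2 * bandwidth**2))

def kernel_mee_gd(X, y, sigma=0.5, bandwidth=1.0, lam=0.0, step=0.1, n_iters=200):
    # Illustrative sketch: gradient ascent on the empirical information
    # potential V = (1/n^2) sum_{i,j} G_sigma(e_i - e_j), e_i = y_i - f(x_i),
    # over f(x) = sum_k alpha_k K(x_k, x), minus a penalty lam * ||f||_K^2.
    n = len(y)
    K = gaussian_gram(X, bandwidth)
    alpha = np.zeros(n)
    for _ in range(n_iters):
        e = y - K @ alpha                      # residuals
        D = e[:, None] - e[None, :]            # pairwise error differences
        G = np.exp(-D**2 / (2 * sigma**2))     # Parzen kernel on the errors
        # dV/de_i = -(2 / (n^2 * sigma^2)) * sum_j G[i, j] * D[i, j]
        dV_de = -(2.0 / (n**2 * sigma**2)) * np.sum(G * D, axis=1)
        # Chain rule through e = y - K @ alpha, plus the penalty gradient
        grad = -K @ dV_de - 2 * lam * (K @ alpha)
        alpha += step * grad
    return alpha, K

# Toy usage (hypothetical data). The MEE objective is invariant to a constant
# shift of f, so predictions are re-centred with the mean residual.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(50)
alpha, K = kernel_mee_gd(X, y, sigma=0.5, bandwidth=0.3, lam=1e-3)
fitted = K @ alpha
fitted += np.mean(y - fitted)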

Research Area(s)

  • Gradient descent algorithm, Information theoretic learning, Kernel method, Minimum error entropy, Regularization

Citation Format(s)

Kernel gradient descent algorithm for information theoretic learning. / Hu, Ting; Wu, Qiang; Zhou, Ding-Xuan.
In: Journal of Approximation Theory, Vol. 263, 105518, 03.2021.
