Moving least-square method in learning theory

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

14 Scopus Citations

Author(s)

  • Hong-Yan Wang
  • Dao-Hong Xiang
  • Ding-Xuan Zhou

Detail(s)

Original language: English
Pages (from-to): 599-614
Journal / Publication: Journal of Approximation Theory
Volume: 162
Issue number: 3
Publication status: Published - Mar 2010

Abstract

Moving least-square (MLS) is an approximation method used in data interpolation, numerical analysis, and statistics. In this paper we consider the MLS method in learning theory for the regression problem. Essential differences between MLS and other common learning algorithms are pointed out: the lack of a natural uniform bound for the estimators, and the pointwise definition of the method. The sample error is estimated in terms of the weight function and the finite-dimensional hypothesis space. The approximation error is dealt with in two special cases, for which convergence rates for the total L2 error, measuring the global approximation on the whole domain, are provided. © 2009 Elsevier Inc. All rights reserved.
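To make the pointwise character of the estimator concrete, the sketch below implements one-dimensional MLS regression with a Gaussian weight function and the linear hypothesis space spanned by {1, t}. This is a minimal illustration of the general technique, not the specific setting analyzed in the paper; the function names, the bandwidth parameter, and the choice of weight function are assumptions made for the example.

```python
# Minimal sketch of moving least-squares (MLS) regression in one dimension.
# Assumptions (not from the paper): a Gaussian weight function, the linear
# hypothesis space span{1, t}, and an illustrative bandwidth parameter.
import numpy as np

def mls_estimate(x, xs, ys, bandwidth=0.1):
    """Evaluate the MLS estimator at a single query point x.

    The fit is defined pointwise: each evaluation solves its own
    weighted least-squares problem over the finite-dimensional
    hypothesis space {1, t}.
    """
    # Weights concentrate on samples near the query point.
    w = np.exp(-((xs - x) / bandwidth) ** 2)
    # Design matrix for the basis {1, t}.
    B = np.column_stack([np.ones_like(xs), xs])
    # Minimize sum_i w_i * (B_i c - y_i)^2 via square-root weighting.
    sw = np.sqrt(w)
    coeffs, *_ = np.linalg.lstsq(sw[:, None] * B, sw * ys, rcond=None)
    # Evaluate the locally fitted polynomial at x.
    return coeffs[0] + coeffs[1] * x

# Usage: recover a smooth regression function from noisy samples.
rng = np.random.default_rng(0)
xs = rng.uniform(0.0, 1.0, 200)
ys = np.sin(2 * np.pi * xs) + 0.1 * rng.standard_normal(200)
print(mls_estimate(0.25, xs, ys))  # approximately sin(pi/2) = 1
```

Note that when few samples fall near the query point the local problem becomes ill-conditioned, so the estimator admits no natural uniform bound; this is the difficulty the paper addresses, in particular through the norming condition listed among its keywords.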

Research Area(s)

  • Approximation error, Learning theory, Moving least-square method, Norming condition, Sample error

Citation Format(s)

Moving least-square method in learning theory. / Wang, Hong-Yan; Xiang, Dao-Hong; Zhou, Ding-Xuan.
In: Journal of Approximation Theory, Vol. 162, No. 3, 03.2010, p. 599-614.
