Sparse additive support vector machines in bounded variation space

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

Detail(s)

Original language: English
Article number: iaae003
Number of pages: 29
Journal / Publication: Information and Inference
Volume: 13
Issue number: 1
Online published: 8 Feb 2024
Publication status: Published - Mar 2024

Abstract

We propose the total variation penalized sparse additive support vector machine (TVSAM) for classification in high-dimensional settings, using a mixed l1-type functional regularization scheme to induce sparsity and smoothness simultaneously. We establish a representer theorem for TVSAM, which turns the infinite-dimensional problem into a finite-dimensional one, thereby providing computational feasibility. Even for the least squares loss, our result fills a gap in the literature when compared with the existing representer theorem. Theoretically, we derive risk bounds for TVSAM under both exact sparsity and near sparsity, and with arbitrarily specified internal knots. In this process, we develop an important interpolation inequality for the space of functions of bounded variation, relying on analytic techniques such as mollification and partition of unity. An efficient implementation based on the alternating direction method of multipliers is employed. © 2024 Oxford University Press. All rights reserved.
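The paper's own TVSAM solver is not reproduced here. As a minimal illustrative sketch of the ADMM machinery the abstract mentions, the toy example below applies ADMM to a 1-D total variation (fused-lasso) penalty, the basic building block of a TV regularizer, on a scalar signal rather than an additive SVM. The function name, parameter values, and test signal are all assumptions for illustration, not the authors' method.

```python
import numpy as np

def tv_denoise_admm(y, lam=0.5, rho=1.0, n_iter=200):
    """ADMM sketch for min_x 0.5*||x - y||^2 + lam*||D x||_1,
    where D is the first-difference operator (1-D total variation).
    Illustrative only; parameter defaults are arbitrary."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)            # (n-1) x n first-difference matrix
    A = np.eye(n) + rho * D.T @ D             # fixed matrix for the x-update
    z = np.zeros(n - 1)                       # auxiliary variable z ≈ D x
    u = np.zeros(n - 1)                       # scaled dual variable
    for _ in range(n_iter):
        # x-update: quadratic subproblem, solved as a linear system
        x = np.linalg.solve(A, y + rho * D.T @ (z - u))
        Dx = D @ x
        # z-update: soft-thresholding (prox of the l1 norm)
        v = Dx + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
        # dual update
        u = u + Dx - z
    return x

# Usage sketch: a noisy step signal gets flattened toward a
# piecewise-constant fit, i.e. its total variation shrinks.
y = np.concatenate([np.zeros(20), np.ones(20)]) + 0.1 * np.sin(np.arange(40))
x = tv_denoise_admm(y)
```

The splitting z = Dx is what decouples the smooth data-fit term from the nonsmooth l1 penalty; the same pattern underlies ADMM solvers for richer mixed penalties such as the one in the paper.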

Research Area(s)

  • additive models, empirical norm penalty, high dimensionality, SVM, total variation penalty