Debiased Distributed Learning for Sparse Partial Linear Models in High Dimensions

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

9 Scopus Citations

Detail(s)

Original language: English
Article number: 2
Journal / Publication: Journal of Machine Learning Research
Volume: 23
Online published: Dec 2021
Publication status: Published - 2022

Abstract

Although various distributed machine learning schemes have been proposed recently for purely linear models and fully nonparametric models, little attention has been paid to distributed optimization for semi-parametric models with multiple structures (e.g., sparsity, linearity, and nonlinearity). To address this gap, the current paper proposes a new communication-efficient distributed learning algorithm for sparse partially linear models with an increasing number of features. The proposed method is based on the classical divide-and-conquer strategy for handling big data, and the computation on each subsample consists of a debiased estimation step applied to a doubly regularized least squares fit. We theoretically prove that the resulting global parametric estimator achieves the optimal parametric rate for our semi-parametric model, given an appropriate partition of the total data. Specifically, the choice of data partition relies on the underlying smoothness of the nonparametric component and is adaptive to the sparsity parameter. Finally, simulated experiments illustrate the empirical performance of our debiasing technique in the distributed setting.
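
To make the divide-and-conquer debiasing idea in the abstract concrete, below is a minimal Python sketch for a purely linear sparse model. It is not the authors' full semi-parametric procedure: the nonparametric (RKHS-penalized) component is omitted, and the precision-matrix surrogate is a simple ridge-regularized inverse rather than, e.g., a nodewise-lasso estimate. All names, parameters, and dimensions are illustrative assumptions.

```python
# Sketch: divide-and-conquer debiased estimation for a sparse linear model.
# Simplified illustration only; the paper's method additionally handles a
# nonparametric component via double (lasso + RKHS) regularization.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, d, m = 2000, 50, 10                        # samples, features, machines
theta = np.zeros(d)
theta[:3] = [2.0, -1.5, 1.0]                  # sparse ground truth

X = rng.standard_normal((n, d))
y = X @ theta + 0.5 * rng.standard_normal(n)

def debiased_lasso(Xk, yk, lam=0.05, ridge=1e-2):
    """Lasso fit on one subsample, followed by a one-step bias correction."""
    nk = Xk.shape[0]
    theta_hat = Lasso(alpha=lam).fit(Xk, yk).coef_
    Sigma = Xk.T @ Xk / nk
    # crude stand-in for a precision-matrix estimate (assumption of this sketch)
    M = np.linalg.inv(Sigma + ridge * np.eye(d))
    # the correction term removes the shrinkage bias of the local lasso fit
    return theta_hat + M @ Xk.T @ (yk - Xk @ theta_hat) / nk

# each machine computes a local debiased estimate; the global estimator is
# their simple average, requiring a single round of communication
local_estimates = [debiased_lasso(Xk, yk)
                   for Xk, yk in zip(np.array_split(X, m),
                                     np.array_split(y, m))]
theta_bar = np.mean(local_estimates, axis=0)

print("estimation error:", np.linalg.norm(theta_bar - theta))
```

Each machine transmits only its d-dimensional debiased estimate, so the communication cost is independent of the total sample size; debiasing is what makes the subsample estimates averageable without inheriting the lasso's shrinkage bias.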

Research Area(s)

  • Big data, Distributed learning, High dimensions, Reproducing kernel Hilbert space (RKHS), Semi-parametric models
