
Variable selection for high-dimensional varying coefficient partially linear models via nonconcave penalty

Zhaoping Hong, Yuao Hu, Heng Lian*

*Corresponding author for this work

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

Abstract

In this paper, we consider the problem of simultaneous variable selection and estimation for varying-coefficient partially linear models in a "small n, large p" setting, where the number of coefficients in the linear part diverges with the sample size while the number of varying coefficients is fixed. A similar problem has been considered in Lam and Fan (Ann Stat 36(5):2232-2260, 2008) based on kernel estimates for the nonparametric part, in which no variable selection was investigated and, in addition, p was assumed to be smaller than n. Here we use polynomial splines to approximate the nonparametric coefficients, which is more computationally expedient; we establish the convergence rates as well as the asymptotic normality of the linear coefficients, and further present the oracle property of the SCAD-penalized estimator, which works for p almost as large as exp{n^{1/2}} under mild assumptions. Monte Carlo studies and real data analysis are presented to demonstrate the finite sample behavior of the proposed estimator. Our theoretical and empirical investigations are carried out for generalized varying-coefficient partially linear models, which include both Gaussian data and binary data as special cases. © 2012 Springer-Verlag Berlin Heidelberg.
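For context on the SCAD penalty named in the abstract and keywords, the following is a minimal sketch of the standard SCAD penalty of Fan and Li (2001) and its derivative, which is what local approximation algorithms for the penalized estimator typically use. The function names and the NumPy implementation are illustrative only and are not taken from the paper.

```python
import numpy as np

def scad_penalty(theta, lam, a=3.7):
    """SCAD penalty p_lam(|theta|) of Fan and Li (2001), elementwise.

    p_lam(t) = lam * t                                    for t <= lam
             = (2*a*lam*t - t**2 - lam**2) / (2*(a - 1))  for lam < t <= a*lam
             = lam**2 * (a + 1) / 2                       for t > a*lam
    where t = |theta| and a > 2 (a = 3.7 is the usual default).
    """
    t = np.abs(np.asarray(theta, dtype=float))
    linear = lam * t
    quadratic = (2 * a * lam * t - t**2 - lam**2) / (2 * (a - 1))
    constant = lam**2 * (a + 1) / 2
    return np.where(t <= lam, linear,
                    np.where(t <= a * lam, quadratic, constant))

def scad_derivative(theta, lam, a=3.7):
    """First derivative p'_lam(|theta|); used in local quadratic
    approximation updates of the penalized objective."""
    t = np.abs(np.asarray(theta, dtype=float))
    return lam * (np.where(t <= lam, 1.0, 0.0)
                  + np.maximum(a * lam - t, 0.0) / ((a - 1) * lam)
                  * np.where(t > lam, 1.0, 0.0))

# Toy check: the penalty is constant beyond a*lam, so large coefficients
# receive zero shrinkage, the property underlying oracle-type results.
beta = np.array([0.05, 0.5, 2.0, 5.0])
print(scad_penalty(beta, lam=0.5))
print(scad_derivative(beta, lam=0.5))
```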
Original language: English
Pages (from-to): 887-908
Journal: Metrika
Volume: 76
Issue number: 7
DOIs
Publication status: Published - Oct 2013
Externally published: Yes

Research Keywords

  • Bayesian information criterion
  • Cross-validation
  • SCAD penalty
