Some Studies on Quantile Tensor Regression
張量分位數回歸的若干問題研究
Student thesis: Doctoral Thesis
Award date: 4 Jul 2022
Permanent link: https://scholars.cityu.edu.hk/en/theses/theses(84cac67eb9084a5db1e4ca12f93123f7).html

Abstract
Tensor data are data in the form of multidimensional arrays, which arise in many fields such as medical research, image analysis, recommendation systems, signal processing, and network data. With the rapid development of these fields, tensor data analysis has attracted increasing attention in recent years. In particular, tensor regression, an important topic in statistics, has drawn much interest in both applied and theoretical research. Meanwhile, quantile regression, an important class of regression models, is widely used for its robustness to heteroscedasticity, heavy-tailed errors, and outliers, as well as for its ability to model the relationship between covariates and response variables comprehensively. We study quantile regression models with a scalar response and tensor covariates. The high dimensionality and complex structure of tensor data pose great challenges, so we propose methods for estimating the coefficients of the linear tensor quantile regression model and then extend them to nonlinear additive models. The main contents and conclusions of this dissertation are as follows.
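The building block shared by all chapters is the quantile check (pinball) loss, whose minimizer is the conditional tau-quantile. A minimal sketch (the function name and example values are illustrative, not from the thesis):

```python
import numpy as np

def check_loss(u, tau):
    """Quantile check (pinball) loss: rho_tau(u) = u * (tau - 1{u < 0})."""
    u = np.asarray(u, dtype=float)
    return u * (tau - (u < 0))

residuals = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
print(check_loss(residuals, 0.5))  # [1.  0.5 0.  0.5 1. ]
print(check_loss(residuals, 0.9))  # under-prediction (u > 0) penalized more
```

At tau = 0.5 the loss is half the absolute loss, recovering median regression; skewed tau values tilt the penalty, which is what makes the model robust to heavy-tailed errors and outliers.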
(1) We study high-dimensional quantile regression with tensor covariates and propose an estimator based on convex regularization. For a general convex decomposable regularizer, we establish an error bound for the estimator under certain conditions. Two specific cases, a sparsity regularizer and a low-rankness regularizer, are then shown to satisfy the conditions of the general theorem, so the convergence rates of the corresponding sparse and low-rank estimators can be established, respectively. The finite-sample performance is demonstrated by simulation studies.
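The estimator in this chapter minimizes the empirical check loss plus a convex decomposable penalty. A sketch of that objective for matrix-valued covariates, with the two regularizers studied here (shapes and toy values are made up for illustration):

```python
import numpy as np

def penalized_objective(B, X, y, tau, lam, reg):
    """Sketch of the regularized estimator's objective:
    (1/n) * sum_i rho_tau(y_i - <X_i, B>) + lam * reg(B).
    X has shape (n, d1, d2), i.e. matrix-valued covariates."""
    inner = np.einsum('nij,ij->n', X, B)      # <X_i, B> for each sample
    u = y - inner
    check = np.mean(u * (tau - (u < 0)))      # quantile check loss
    return check + lam * reg(B)

# two decomposable regularizers considered in this chapter
l1_norm = lambda B: np.abs(B).sum()                                # sparsity
nuclear_norm = lambda B: np.linalg.svd(B, compute_uv=False).sum()  # low-rankness

B = np.eye(2)
X = np.ones((2, 2, 2))
y = np.array([3.0, 1.0])
print(penalized_objective(B, X, y, 0.5, 0.1, l1_norm))       # 0.7
print(penalized_objective(B, X, y, 0.5, 0.1, nuclear_norm))  # 0.7
```

Swapping `reg` switches which low-dimensional structure the estimator favors; the error-bound analysis covers any decomposable choice.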
(2) We study high-dimensional matrix regression with a scalar response and matrix covariates, and propose an estimator based on convex regularization. To reduce the number of effective parameters, the coefficient matrix is assumed to be low-rank and/or sparse, so we impose two regularizers simultaneously to encourage the two low-dimensional structures. The asymptotic properties and an implementation based on an incremental proximal gradient algorithm are developed. We then apply the proposed estimator to quantile regression with interactions. The advantages of the proposed method in its application to quadratic regression are also illustrated by simulations and real data analysis.
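Proximal gradient methods for this kind of problem reduce to repeatedly applying the proximal operators of the two penalties. A sketch of the standard operators (this is textbook machinery, not the chapter's specific algorithm): entrywise soft-thresholding for sparsity and singular-value thresholding for low-rankness.

```python
import numpy as np

def soft_threshold(M, lam):
    """Proximal operator of the elementwise l1 norm (encourages sparsity)."""
    return np.sign(M) * np.maximum(np.abs(M) - lam, 0.0)

def singular_value_threshold(M, lam):
    """Proximal operator of the nuclear norm (encourages low rank):
    soft-threshold the singular values of M."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt

A = np.diag([3.0, 1.0])
print(soft_threshold(A, 0.5))            # shrinks every entry toward zero
print(singular_value_threshold(A, 2.0))  # removes the smaller singular value
```

Applying both operators within one iterative scheme is what lets a single estimator encourage sparsity and low-rankness simultaneously.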
(3) We propose an estimator based on tensor decomposition for quantile regression with tensor covariates. The decomposition effectively reduces the number of parameters, making computation with an alternating-update algorithm feasible. For high-dimensional cases in which the dimensionality of the coefficient tensor exceeds the sample size, we use a sparse Tucker decomposition to reduce the number of parameters further, and propose an alternating-update algorithm combined with the alternating direction method of multipliers (ADMM). The asymptotic properties of the estimators are established under suitable conditions. The numerical performance is demonstrated via simulations and an application to a crowd density estimation problem.
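To see the parameter reduction a Tucker decomposition buys, consider a third-order coefficient tensor B = G ×₁ U₁ ×₂ U₂ ×₃ U₃ with a small core G and factor matrices U₁, U₂, U₃. A sketch with made-up dimensions and ranks (not those used in the thesis):

```python
import numpy as np

d1, d2, d3 = 30, 30, 30     # ambient dimensions of the coefficient tensor
r1, r2, r3 = 3, 3, 3        # Tucker ranks
rng = np.random.default_rng(0)
G = rng.standard_normal((r1, r2, r3))    # core tensor
U1 = rng.standard_normal((d1, r1))       # factor matrices
U2 = rng.standard_normal((d2, r2))
U3 = rng.standard_normal((d3, r3))

# reconstruct the full coefficient tensor via the three mode products
B = np.einsum('abc,ia,jb,kc->ijk', G, U1, U2, U3)

n_full = d1 * d2 * d3                                  # 27000 free parameters
n_tucker = r1 * r2 * r3 + d1 * r1 + d2 * r2 + d3 * r3  # 297 after decomposition
print(B.shape, n_full, n_tucker)
```

With these toy numbers the parameter count drops from 27,000 to 297, which is what makes estimation feasible when the coefficient tensor's dimensionality exceeds the sample size; the alternating-update algorithm then cycles through G, U₁, U₂, U₃, holding the others fixed.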
(4) We study tensor additive quantile regression. We approximate the additive component functions by series expansions with a B-spline basis and organize the spline coefficients into a tensor, which induces a linear tensor regression with transformed predictors; tensor decomposition techniques can thus be used for dimension reduction. Moreover, we introduce a sparsity assumption on the additive components to handle high-dimensional tensor covariates. With the B-spline approximation, estimating an additive component amounts to estimating its group of spline coefficients, so we apply a group-Lasso penalty to encourage sparsity. The proposed estimators are shown to achieve the optimal rate of convergence. For implementation, we apply an ADMM-based alternating-update algorithm and a convolutional neural network (CNN) based algorithm, whose efficiency is demonstrated through simulation studies.
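The group-Lasso penalty acts on each component's block of spline coefficients as a whole: the block is either shrunk jointly or set to zero, which is what removes an entire additive component. A sketch of the standard group soft-thresholding operator (illustrative values, not from the thesis):

```python
import numpy as np

def group_soft_threshold(theta, lam):
    """Proximal operator of the group-Lasso penalty: shrink one additive
    component's whole block of spline coefficients at once."""
    norm = np.linalg.norm(theta)
    if norm <= lam:
        return np.zeros_like(theta)   # the component is dropped entirely
    return (1.0 - lam / norm) * theta

g = np.array([3.0, 4.0])             # a coefficient block with ||g||_2 = 5
print(group_soft_threshold(g, 1.0))  # [2.4 3.2] -- shrunk but kept
print(group_soft_threshold(g, 6.0))  # [0. 0.]   -- component removed
```

Because the shrinkage depends only on the block's Euclidean norm, sparsity is induced at the level of component functions rather than individual spline coefficients.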
The main innovations of this dissertation are as follows. First, we propose a regularization-based estimator for quantile regression with tensor covariates; it achieves the same rate of convergence as its mean-regression counterpart. Second, for matrix quantile regression, a special case of tensor regression, we provide an estimator that accommodates sparse and/or low-rank assumptions simultaneously. We then apply this matrix regression approach to high-dimensional quadratic quantile regression, yielding an estimator that, unlike conventional quadratic regression approaches, does not have to rely on a sparsity assumption. Third, we propose an estimator for quantile tensor regression based on the Tucker decomposition and establish its convergence rate for the first time. Fourth, we enrich the class of tensor regression models by studying tensor additive quantile regression, proposing an estimator that achieves the optimal rate of convergence for additive regression models. Moreover, for implementation, we propose alternating-update and neural-network-based algorithms, which are shown to be effective for large tensor data such as images.
The proposed models characterize linear or nonlinear relationships between scalar responses and tensor covariates. They enrich the regression models available for tensor data and can be applied to analyze image data and gene data.