Statistical Inference after Model Averaging

Project: Research



Widespread findings that model averaging can lead to better estimators have attracted a great deal of attention in recent years. When faced with model uncertainty, the traditional approach is to estimate multiple models and employ a model selection method to select the "champion" of all models. Typically, inference is conditioned on this champion model, and results are reported without recognition of the fact that model selection is a random event, leading to assertions of model superiority that may not withstand replication. In recent years, model selection has evolved into model averaging. Rather than discontinuously switching between models, a model average smoothly interpolates between them. Averaging guards against the choice of a very poor model, and thus holds promise for reducing estimation risk. Seminal papers by Yang (2001), Hjort & Claeskens (2003), Hansen (2007, 2008) and Hansen & Racine (2012) have sparked growing enthusiasm for research on frequentist model averaging, resulting in a large body of literature on weight choice methods aimed at achieving some form of optimality. We (the P.I. and Co-I.'s) have actively pursued research in this area in recent years. Our publications on model averaging include several highly cited papers in the Journal of Econometrics, Econometric Theory, Journal of Business and Economic Statistics, Econometric Reviews, Journal of the American Statistical Association, Biometrika and Annals of Statistics.

This proposed research seeks to expand our knowledge of model averaging in an important area that has been insufficiently investigated. Specifically, the majority of published studies on frequentist model averaging have emphasized the development of weight choice methods oriented towards achieving some form of optimality for the resultant point estimator.
Relatively little is known about how to perform statistical inference based on the model average estimator. Addressing this important question requires knowledge of the distributions of model average estimators. For example, to determine the effects of model averaging on the coverage probability of a confidence interval, we need to evaluate the sampling distribution of the model average estimator. Given the additional demands this places on the analysis, it is not surprising that the literature has paid only scant attention to the question. In this project, we investigate post-model-averaging inference by deriving the distributions of several information-criterion-score-based model average estimators. While some progress has been made in understanding the implications of model averaging for inference, existing results are largely restricted to the unrealistic local misspecification framework, which assumes that all models converge to the smallest model as the sample size grows. In our work, we consider a fixed-parameter setup and derive results with broader applicability than those obtained under local misspecification.

We have carried out preliminary theoretical analysis for the project. We are currently fine-tuning our theory by incorporating broader conditions and assumptions to make the analysis more widely applicable. The requested funding is mainly intended for recruiting a research assistant to carry out simulation and real data studies.
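To fix ideas, a minimal sketch of an information-criterion-score-based model average estimator is given below. It is not the estimator studied in this project: it uses smoothed-AIC weights (w_m proportional to exp(-AIC_m/2)) over nested linear regression models as one illustrative member of this class, and all data, variable names, and candidate model sets are hypothetical.

```python
import numpy as np

# Hypothetical data: linear model with three regressors, the last irrelevant.
rng = np.random.default_rng(0)
n = 200
X = rng.standard_normal((n, 3))
beta_true = np.array([1.0, 0.5, 0.0])
y = X @ beta_true + rng.standard_normal(n)

# Candidate models: nested subsets of regressor columns (illustrative choice).
candidates = [[0], [0, 1], [0, 1, 2]]

def fit_ols(Xs, y):
    """OLS fit; returns coefficients and the Gaussian log-likelihood at the MLE."""
    coef, _, _, _ = np.linalg.lstsq(Xs, y, rcond=None)
    resid = y - Xs @ coef
    sigma2 = resid @ resid / len(y)
    loglik = -0.5 * len(y) * (np.log(2 * np.pi * sigma2) + 1)
    return coef, loglik

aics, coefs_full = [], []
for cols in candidates:
    coef, ll = fit_ols(X[:, cols], y)
    aics.append(2 * (len(cols) + 1) - 2 * ll)  # AIC = 2k - 2 log L
    b = np.zeros(3)                            # embed in the full parameter space
    b[cols] = coef
    coefs_full.append(b)

aics = np.array(aics)
# Smoothed-AIC weights: w_m ∝ exp(-AIC_m / 2), shifted for numerical stability.
w = np.exp(-(aics - aics.min()) / 2)
w /= w.sum()

# The model average estimator: a weighted combination of candidate estimates.
beta_avg = np.sum(w[:, None] * np.array(coefs_full), axis=0)
print("weights:", w)
print("model average estimate:", beta_avg)
```

Because the weights are themselves random functions of the data, the sampling distribution of `beta_avg` is not that of any single candidate's OLS estimator, which is precisely why naive post-averaging confidence intervals can be miscalibrated.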


Project number: 9042873
Grant type: GRF
Effective start/end date: 1/11/19 → …