A Misspecification Test for Simulation Metamodels


Student thesis: Doctoral Thesis



Award date: 17 Aug 2018


In the operations research and management science (OR/MS) field, it is common to analyze real, complex systems using simulation models. Although computing power and speed continue to increase, simulation and analysis codes continue to grow in complexity and remain computationally expensive, which limits their use in optimization and prediction. Researchers therefore often approximate the true simulation model with inexpensive surrogate functions, referred to as simulation metamodels. Because the choice of metamodel affects all subsequent analysis, assessing metamodel adequacy is essential.

In this thesis, we propose a novel misspecification test for simulation metamodel assessment. We first design an error measure, the mean squared discrepancy (MSD), which characterizes the fidelity of a metamodel, and we derive an unbiased and consistent estimator of this measure. Based on the estimator, we construct a consistent test for assessing the adequacy of parametric regression metamodels. The test statistic is shown to be asymptotically normally distributed under the null hypothesis that the parametric regression metamodel is correct, and to diverge to infinity at rate √n, where n is the test sample size, if the metamodel is inadequate. We further suggest a strategy for improving the finite-sample performance of the test. We then extend the test to a precise hypothesis test for any fixed metamodel. Furthermore, as a by-product, we construct a confidence interval (CI) estimate for the MSD.
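The overall construction can be illustrated with a simplified sketch. This is not the thesis's actual estimator: the function `msd_test`, the per-point bias-correction term `s2 / r`, and the normal-approximation p-value below are our own simplifying assumptions, chosen only to show how a squared-discrepancy measure with an asymptotically normal test statistic might look when each design point has replicated simulation outputs.

```python
import math
import numpy as np

def msd_test(x, y_reps, metamodel):
    """Hypothetical MSD-style misspecification test (illustrative only).

    x        : (n,) array of design points
    y_reps   : (n, r) array of r simulation replications per point
    metamodel: callable mapping x to the predicted mean response
    """
    n, r = y_reps.shape
    ybar = y_reps.mean(axis=1)            # sample mean response per point
    s2 = y_reps.var(axis=1, ddof=1)       # sample variance per point
    # (m(x) - ybar)^2 overestimates the squared bias by Var(ybar) = s2/r,
    # so subtracting s2/r gives an unbiased per-point discrepancy estimate.
    d = (metamodel(x) - ybar) ** 2 - s2 / r
    msd_hat = d.mean()
    se = d.std(ddof=1) / math.sqrt(n)
    z = msd_hat / se                      # approximately N(0, 1) under H0
    p = 0.5 * math.erfc(z / math.sqrt(2.0))  # one-sided upper-tail p-value
    return msd_hat, z, p

# Illustration: true mean response is 2x with unit-variance noise.
rng = np.random.default_rng(0)
n, r = 200, 5
x = rng.uniform(0.0, 1.0, n)
y = 2.0 * x[:, None] + rng.standard_normal((n, r))

msd_ok, z_ok, p_ok = msd_test(x, y, lambda t: 2.0 * t)           # adequate
msd_bad, z_bad, p_bad = msd_test(x, y, lambda t: 2.0 * t + 0.5)  # biased
```

In this toy setup the adequate metamodel yields an MSD estimate near zero, while the biased metamodel's estimate concentrates near its true squared bias of 0.25 and its z-statistic grows with √n, mirroring the divergence behavior described above.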

Preliminary numerical studies examine the performance of the proposed test and the constructed CI. The results agree with our theoretical analyses and show that the test performs well.