Computational Statistics in Data Science
1 Introduction
Monte Carlo simulation methods generate observations from a chosen distribution in an effort to estimate unknowns of that distribution. A rich variety of methods fall under this characterization, including classical Monte Carlo simulation, Markov chain Monte Carlo (MCMC), importance sampling, and quasi‐Monte Carlo.
Consider a distribution $F$ defined on a $d$‐dimensional space $\mathcal{X}$, and suppose that $\theta \in \mathbb{R}^{p}$ are features of interest of $F$. Specifically, $\theta$ may be a combination of quantiles, means, and variances associated with $F$. Samples $X_1, \dots, X_n$ are obtained via simulation either approximately or exactly from $F$, and a consistent estimator $\hat{\theta}_n$ of $\theta$ is constructed so that, as $n \to \infty$,

$$\hat{\theta}_n \to \theta \quad \text{with probability } 1. \qquad (1)$$
Thus, even when $F$ is a complicated distribution, Monte Carlo simulation allows for estimation of features of $F$. Throughout, we assume that either independent and identically distributed (IID) samples or MCMC samples from $F$ can be obtained efficiently; see Refs [1–5] for various techniques.
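As a minimal illustration of this setup (assuming NumPy; the Exponential(1) target and the particular features are illustrative stand-ins for $F$ and $\theta$, not from the text):

```python
import numpy as np

rng = np.random.default_rng(1)

# Draw IID samples from a chosen distribution F (here Exponential(1)
# as a stand-in) and form consistent estimators of two features of F:
# its mean and its 0.9-quantile.
n = 100_000
x = rng.exponential(scale=1.0, size=n)

theta_mean = x.mean()            # estimates E[X] = 1
theta_q90 = np.quantile(x, 0.9)  # estimates -log(0.1) ≈ 2.3026

print(theta_mean, theta_q90)
```

By (1), both estimates converge to the true features as the simulation size grows; the only question, taken up below, is how large $n$ must be.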
The foundation of Monte Carlo simulation methods rests on asymptotic convergence, as indicated by (1). When enough samples are obtained, $\hat{\theta}_n \approx \theta$, and simulation can be terminated with reasonable confidence. For many estimators, an asymptotic sampling distribution is available to ascertain the variability in estimation, via a central limit theorem (CLT) or application of the delta method to a CLT. Section 2 introduces estimators of $\theta$, while Section 3 discusses sampling distributions of these estimators for IID and MCMC sampling.
Although Monte Carlo simulation relies on large‐sample frequentist statistics, it is fundamentally different in two ways. First, data is generated by a computer, and so often there is little cost to obtaining further samples. Thus, the reliance on asymptotics is reasonable. Second, data is obtained sequentially, so determining when to terminate the simulation can be based on the samples already obtained. As this implies a random simulation time, additional safeguards are necessary to ensure asymptotic validity. This has led to the study of sequential stopping rules, which we present in Section 5.
Sequential stopping rules rely on estimating the limiting Monte Carlo variance–covariance matrix (when $p = 1$, this is the standard error of $\hat{\theta}_n$). This is a particularly challenging problem in MCMC due to serial correlation in the samples. We discuss these challenges in Section 4 and present estimators appropriate for large simulation sizes.
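Serial correlation is why naive IID variance formulas understate the Monte Carlo error of MCMC averages. As a hedged sketch of one standard remedy (the batch-means estimator; this is an illustration, not necessarily the specific estimator of Section 4), assuming NumPy and using an AR(1) chain as a stand-in for MCMC output:

```python
import numpy as np

rng = np.random.default_rng(2)

def batch_means_variance(chain):
    """Batch-means estimate of the asymptotic variance of the mean,
    i.e., of lim n * Var(sample mean), for a correlated sequence."""
    n = len(chain)
    b = int(np.floor(np.sqrt(n)))   # batch size ~ sqrt(n)
    a = n // b                      # number of batches
    means = chain[:a * b].reshape(a, b).mean(axis=1)
    return b * means.var(ddof=1)

# AR(1) stand-in for MCMC output: X_t = 0.5 X_{t-1} + e_t, e_t ~ N(0,1).
# Its true asymptotic variance of the mean is 1 / (1 - 0.5)^2 = 4.
n = 200_000
e = rng.standard_normal(n)
x = np.empty(n)
x[0] = e[0]
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + e[t]

sigma2 = batch_means_variance(x)   # should be near 4
naive = x.var(ddof=1)              # stationary variance ≈ 4/3, far too small
print(sigma2, naive)
```

The naive variance here is roughly a third of the correct asymptotic variance, which is exactly the understatement that motivates the estimators of Section 4.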
Over a variety of examples in Section 7, we conclude that the simulation size required for reliable estimation is often higher than what is commonly used by practitioners (see also Refs [6, 7]). Given modern computational power, the recommended strategies can easily be adopted in most estimation problems. We conclude the introduction with an example illustrating the need for careful sample size calculations.
Example 1. Consider IID draws $Y_1, \dots, Y_m \sim N(\mu, \sigma^2)$. An estimate of $\mu$ is the sample mean $\bar{Y}$, and $\sigma^2$ is estimated with the sample variance, $s^2$. Let $z_{1-\alpha/2}$ be the $(1-\alpha/2)$th quantile of a standard normal distribution, for $0 < \alpha < 1$. A large‐sample confidence interval for $\mu$ is

$$\left[\,\bar{Y} - z_{1-\alpha/2}\,\frac{s}{\sqrt{m}},\;\; \bar{Y} + z_{1-\alpha/2}\,\frac{s}{\sqrt{m}}\,\right].$$
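For concreteness, one replication of this interval can be computed as follows (NumPy assumed; the values of $\mu$, $\sigma$, and $m$ are illustrative choices, and the constant 1.959964 is $z_{0.975}$ for a 95% interval):

```python
import numpy as np

rng = np.random.default_rng(3)

# One large-sample 95% confidence interval for mu from m IID draws.
mu, sigma, m = 2.0, 3.0, 500          # illustrative truth and sample size
y = rng.normal(mu, sigma, size=m)

z = 1.959964                          # z_{1 - alpha/2} with alpha = 0.05
ybar = y.mean()
s = y.std(ddof=1)                     # sample standard deviation
half_width = z * s / np.sqrt(m)
ci = (ybar - half_width, ybar + half_width)
print(ci)
```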
Confidence intervals are notoriously difficult to understand at first encounter, and thus a standard Monte Carlo experiment in an introductory statistics course is to repeat the above experiment multiple times and illustrate that, on average, about a $1-\alpha$ proportion of such confidence intervals will contain the true mean. That is, for $r = 1, \dots, n$, we generate $Y_1^{(r)}, \dots, Y_m^{(r)} \sim N(\mu, \sigma^2)$, calculate the mean $\bar{Y}^{(r)}$ and the sample variance $s^{2(r)}$, and define $\hat{\delta}_n$ to be

$$\hat{\delta}_n = \frac{1}{n}\sum_{r=1}^{n} I\!\left(\bar{Y}^{(r)} - z_{1-\alpha/2}\,\frac{s^{(r)}}{\sqrt{m}} \;\le\; \mu \;\le\; \bar{Y}^{(r)} + z_{1-\alpha/2}\,\frac{s^{(r)}}{\sqrt{m}}\right)$$
where $I(\cdot)$ is the indicator function. By the law of large numbers, $\hat{\delta}_n \to 1-\alpha$ with probability 1, as $n \to \infty$, and the following CLT holds:

$$\sqrt{n}\left(\hat{\delta}_n - (1-\alpha)\right) \;\overset{d}{\to}\; N\!\left(0,\; \alpha(1-\alpha)\right).$$
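The classroom experiment is short to code. The sketch below assumes NumPy; the values of $\mu$, $\sigma$, $m$, and $n$ are illustrative choices, not prescribed by the text:

```python
import numpy as np

rng = np.random.default_rng(4)

# Repeat the CI construction n times and record the fraction of
# intervals that cover the true mean mu.
mu, sigma, m = 0.0, 1.0, 200   # illustrative truth and per-replication size
z = 1.959964                   # z_{1 - alpha/2} with alpha = 0.05

n = 2000                       # Monte Carlo replications
cover = 0
for _ in range(n):
    y = rng.normal(mu, sigma, size=m)
    ybar, s = y.mean(), y.std(ddof=1)
    hw = z * s / np.sqrt(m)
    cover += (ybar - hw <= mu <= ybar + hw)

delta_hat = cover / n          # estimates the coverage, about 1 - alpha
print(delta_hat)
```

With $n = 2000$ replications, $\hat{\delta}_n$ lands near 0.95, but the CLT above says its standard error is still about $\sqrt{0.05 \cdot 0.95 / 2000} \approx 0.005$, which motivates the sample-size calculation that follows.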
In conducting this experiment, we must choose the Monte Carlo sample size $n$. A reasonable argument here is that our estimator must be accurate up to the second significant digit with roundoff. That is, we may allow a margin of error of 0.005. This implies that $n$ must be chosen so that

$$\sqrt{\frac{\alpha(1-\alpha)}{n}} \;\le\; 0.005, \quad \text{that is,} \quad n \;\ge\; \frac{\alpha(1-\alpha)}{(0.005)^2}.$$
That is, to construct, say, a 95% confidence interval ($\alpha = 0.05$), an accurate Monte Carlo study in this simple example requires at least $0.05 \cdot 0.95/(0.005)^2 = 1900$ Monte Carlo samples. A higher precision would require an even larger simulation size! This is an example of an absolute‐precision stopping rule (Section 5) and is unique since the limiting variance $\alpha(1-\alpha)$ is known. For further discussion of this example, see Frey [8].
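The calculation can be checked directly (a minimal sketch; the 95% level and the 0.005 margin are the values used above):

```python
import math

# Choose n so that the Monte Carlo standard error of the coverage
# estimate, sqrt(alpha * (1 - alpha) / n), is at most the 0.005 margin.
alpha = 0.05     # targeting a 95% confidence interval
margin = 0.005
n_min = math.ceil(alpha * (1 - alpha) / margin ** 2)
print(n_min)     # -> 1900
```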