References

1 Caffo, B.S., Booth, J.G., and Davison, A.C. (2002) Empirical supremum rejection sampling. Biometrika, 89, 745–754.

2 Chib, S. and Greenberg, E. (1995) Understanding the Metropolis‐Hastings algorithm. Am. Stat., 49, 327–335.

3 Fishman, G.S. (1996) Monte Carlo: Concepts, Algorithms, and Applications, Springer, New York.

4 Robert, C.P. and Casella, G. (2013) Monte Carlo Statistical Methods, Springer, New York.

5 Robert, C.P., Elvira, V., Tawn, N., and Wu, C. (2018) Accelerating MCMC algorithms. Wiley Interdiscip. Rev. Comput. Stat., 10, e1435.

6 Flegal, J.M., Haran, M., and Jones, G.L. (2008) Markov chain Monte Carlo: can we trust the third significant figure? Stat. Sci., 23, 250–260.

7 Koehler, E., Brown, E., and Haneuse, S.J.‐P. (2009) On the assessment of Monte Carlo error in simulation‐based statistical analyses. Am. Stat., 63, 155–162.

8 Frey, J. (2010) Fixed‐width sequential confidence intervals for a proportion. Am. Stat., 64, 242–249.

9 Roberts, G.O. and Rosenthal, J.S. (2004) General state space Markov chains and MCMC algorithms. Probab. Surv., 1, 20–71.

10 Robertson, N., Flegal, J.M., Vats, D., and Jones, G.L. (2019) Assessing and visualizing simultaneous simulation error. arXiv preprint arXiv:1904.11912.

11 Doss, C.R., Flegal, J.M., Jones, G.L., and Neath, R.C. (2014) Markov chain Monte Carlo estimation of quantiles. Electron. J. Stat., 8, 2448–2478.

12 Brooks, S.P. and Gelman, A. (1998) General methods for monitoring convergence of iterative simulations. J. Comput. Graph. Stat., 7, 434–455.

13 Jones, G.L. (2004) On the Markov chain central limit theorem. Probab. Surv., 1, 299–320.

14 Andrews, D.W. (1991) Heteroskedasticity and autocorrelation consistent covariance matrix estimation. Econometrica, 59, 817–858.

15 Vats, D., Flegal, J.M., and Jones, G.L. (2018) Strong consistency of multivariate spectral variance estimators in Markov chain Monte Carlo. Bernoulli, 24, 1860–1909.

16 Seila, A.F. (1982) Multivariate estimation in regenerative simulation. Oper. Res. Lett., 1, 153–156.

17 Hobert, J.P., Jones, G.L., Presnell, B., and Rosenthal, J.S. (2002) On the applicability of regenerative simulation in Markov chain Monte Carlo. Biometrika, 89, 731–743.

18 Geyer, C.J. (1992) Practical Markov chain Monte Carlo (with discussion). Stat. Sci., 7, 473–511.

19 Dai, N. and Jones, G.L. (2017) Multivariate initial sequence estimators in Markov chain Monte Carlo. J. Multivar. Anal., 159, 184–199.

20 Kosorok, M.R. (2000) Monte Carlo error estimation for multivariate Markov chains. Stat. Probab. Lett., 46, 85–93.

21 Chen, D.‐F.R. and Seila, A.F. (1987) Multivariate inference in stationary simulation using batch means, in Proceedings of the 19th Conference on Winter Simulation, ACM, pp. 302–304.

22 Jones, G.L., Haran, M., Caffo, B.S., and Neath, R. (2006) Fixed‐width output analysis for Markov chain Monte Carlo. J. Am. Stat. Assoc., 101, 1537–1547.

23 Chien, C.‐H. (1988) Small sample theory for steady state confidence intervals, in Proceedings of the Winter Simulation Conference (eds M. Abrams, P. Haigh, and J. Comfort), Association for Computing Machinery, New York, NY, USA, pp. 408–413, doi: https://doi.org/10.1145/318123.318225.

24 Chien, C.‐H., Goldsman, D., and Melamed, B. (1997) Large‐sample results for batch means. Manage. Sci., 43, 1288–1295.

25 Flegal, J.M. and Jones, G.L. (2010) Batch means and spectral variance estimators in Markov chain Monte Carlo. Ann. Stat., 38, 1034–1070.

26 Vats, D., Flegal, J.M., and Jones, G.L. (2019) Multivariate output analysis for Markov chain Monte Carlo. Biometrika, 106, 321–337.

27 Vats, D. and Flegal, J.M. (2020) Lugsail lag windows for estimating time‐average covariance matrices. arXiv preprint arXiv:1809.04541.

28 Liu, Y. and Flegal, J.M. (2018) Weighted batch means estimators in Markov chain Monte Carlo. Electron. J. Stat., 12, 3397–3442.

29 Glynn, P.W. and Whitt, W. (1992) The asymptotic validity of sequential stopping rules for stochastic simulations. Ann. Appl. Probab., 2, 180–198.

30 Jarner, S.F. and Hansen, E. (2000) Geometric ergodicity of Metropolis algorithms. Stoch. Proc. Appl., 85, 341–361.

31 Roberts, G.O. and Tweedie, R.L. (1996) Geometric convergence and central limit theorems for multidimensional Hastings and Metropolis algorithms. Biometrika, 83, 95–110.

32 Vats, D. (2017) Geometric ergodicity of Gibbs samplers in Bayesian penalized regression models. Electron. J. Stat., 11, 4033–4064.

33 Khare, K. and Hobert, J.P. (2013) Geometric ergodicity of the Bayesian lasso. Electron. J. Stat., 7, 2150–2163.

34 Tan, A., Jones, G.L., and Hobert, J.P. (2013) On the geometric ergodicity of two‐variable Gibbs samplers, in Advances in Modern Statistical Theory and Applications: A Festschrift in Honor of Morris L. Eaton (eds G.L. Jones and X. Shen), Institute of Mathematical Statistics, Beachwood, Ohio, pp. 25–42.

35 Hobert, J.P. and Geyer, C.J. (1998) Geometric ergodicity of Gibbs and block Gibbs samplers for a hierarchical random effects model. J. Multivar. Anal., 67, 414–430.

36 Jones, G.L. and Hobert, J.P. (2004) Sufficient burn‐in for Gibbs samplers for a hierarchical random effects model. Ann. Stat., 32, 784–817.

37 Gupta, K. and Vats, D. (2020) Estimating Monte Carlo variance from multiple Markov chains. arXiv preprint arXiv:2007.04229.

38 Dawkins, B. (1991) Siobhan's problem: the coupon collector revisited. Am. Stat., 45 (1), 76–82.

39 Marske, D.M. (1967) BOD Data Interpretation Using the Sum of Squares Surface, University of Wisconsin, Madison.

40 Bates, D.M. and Watts, D.G. (1988) Nonlinear Regression Analysis and Its Applications, vol. 2, Wiley, New York.

41 Newton, M.A. and Raftery, A.E. (1994) Approximate Bayesian inference with the weighted likelihood bootstrap. J. R. Stat. Soc., Ser. B, 56, 3–26.

42 Archila, F.H.A. (2016) Markov chain Monte Carlo for linear mixed models. PhD thesis. University of Minnesota.
