I.3. Statistical methods
Some of the most common examples of statistical models featuring latent variables are described here.
Mixture models are used to define a small number of groups into which a set of observations may be sorted. In this case, the latent variables are discrete, indicating the group to which each observation belongs. Stochastic block models (SBMs) and latent block models (LBMs, or bipartite SBMs) are specific forms of mixture models used when the observations take the form of a network.

Hidden Markov models (HMMs) are often used to analyze data collected over a period of time (such as the trajectory of an animal observed at a series of dates) while accounting for an underlying process (such as the activity of the tracked animal: sleep, movement, hunting, etc.) that affects the observations (the animal’s position or trajectory). In this case, the latent variables are discrete and represent the activity of the animal at each instant. In other models, the hidden process itself may be continuous.

Mixed (generalized) linear models are one of the key tools used in ecology to describe the effects of a set of conditions (environmental or otherwise) on a population or community. These models include random effects which are, in essence, latent variables, used to account for higher-than-expected dispersion or for dependency relationships between variables. In most cases, these latent variables are continuous and essentially instrumental in nature.

Joint species distribution models (JSDMs) are a multidimensional version of generalized linear models, used to describe the composition of a community as a function both of environmental variables and of the interactions between constituent species. Many JSDMs use a multidimensional (e.g. Gaussian) latent variable, the dependency structure of which is used to describe inter-species interactions.
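To make the latent-variable structure of an HMM concrete, here is a minimal simulation sketch in Python, with two hypothetical activity states ("resting" and "moving") and illustrative parameter values that are not taken from the book:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical two-state HMM for an animal trajectory:
# hidden state 0 = "resting" (short steps), state 1 = "moving" (long steps).
# All parameter values below are illustrative, not taken from the book.
transition = np.array([[0.9, 0.1],    # P(next state | current state)
                       [0.2, 0.8]])
step_mean = np.array([0.1, 2.0])      # mean step length in each state
step_sd = np.array([0.05, 0.5])       # step-length spread in each state

n_steps = 200
states = np.empty(n_steps, dtype=int)  # latent variables: activity at each instant
steps = np.empty(n_steps)              # observations: step lengths

states[0] = 0
steps[0] = rng.normal(step_mean[0], step_sd[0])
for t in range(1, n_steps):
    states[t] = rng.choice(2, p=transition[states[t - 1]])
    steps[t] = rng.normal(step_mean[states[t]], step_sd[states[t]])

# In a real analysis only `steps` would be observed; `states` is hidden
# and must be reconstructed along with the model parameters.
```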
In ecology, models are often used to describe the effect of experimental conditions or environmental variables on the response or behavior of one or more species. Explanatory variables of this kind are often known as covariates. These effects are typically accounted for using a regression term, as in the case of generalized linear models. A regression term of this type may also be used in latent variable models, in which case the distribution of the response variable in question is considered to depend on both the observed covariates and non-observable latent variables.
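As an illustration of how a regression term and a latent variable combine in one linear predictor, the following sketch simulates count data whose log-mean adds an observed covariate effect to a latent site-level effect; the model structure, names and parameter values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Poisson regression with one covariate and a latent site effect.
n_sites, n_visits = 20, 10
beta0, beta1, sigma_u = 0.5, 0.8, 0.6        # illustrative parameter values

x = rng.normal(size=(n_sites, n_visits))     # observed covariate
u = rng.normal(0.0, sigma_u, size=n_sites)   # latent variable: site random effect

log_mean = beta0 + beta1 * x + u[:, None]    # regression term + latent term
counts = rng.poisson(np.exp(log_mean))       # observed response
```

Conditional on the latent effect u, the counts follow an ordinary Poisson regression; marginally, u induces extra dispersion and dependence among visits to the same site.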
Many methods have been proposed for estimating the parameters of a model featuring latent variables. From a frequentist perspective, the oldest and most widespread means of computing the maximum likelihood estimator is the expectation–maximization (EM) algorithm, which exploits the fact that the parameters of many of these models would be easy to estimate if the latent variables could be observed. The EM algorithm alternates between two steps: in the E step, all of the quantities involving latent variables are computed; these are then used to update the parameter estimates in the M step. The E step amounts to determining the conditional distribution of the latent variables given the observed data. This calculation may be immediate (as in the case of mixture models and certain mixed models), possible but costly (as in the case of HMMs), or intractable for combinatorial or analytical reasons.
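The following Python sketch shows the two steps in the simplest setting, a two-component Gaussian mixture; the initialization and all numerical values are illustrative assumptions, not the book's code:

```python
import numpy as np
from scipy.stats import norm

def em_gaussian_mixture(y, n_iter=100):
    """A minimal EM sketch for a two-component Gaussian mixture (illustrative only)."""
    # Crude initialization from the data.
    pi = 0.5                                   # mixing proportion of group 1
    mu = np.array([y.min(), y.max()], dtype=float)
    sd = np.array([y.std(), y.std()])
    for _ in range(n_iter):
        # E step: conditional probability that each observation belongs to
        # group 1, given the data and the current parameter values.
        w1 = pi * norm.pdf(y, mu[1], sd[1])
        w0 = (1 - pi) * norm.pdf(y, mu[0], sd[0])
        tau = w1 / (w0 + w1)
        # M step: update the parameters using the expected group memberships.
        pi = tau.mean()
        mu = np.array([np.average(y, weights=1 - tau), np.average(y, weights=tau)])
        sd = np.sqrt(np.array([np.average((y - mu[0]) ** 2, weights=1 - tau),
                               np.average((y - mu[1]) ** 2, weights=tau)]))
    return pi, mu, sd

# Example: data simulated from a known two-group mixture.
rng = np.random.default_rng(1)
y = np.concatenate([rng.normal(0, 1, 300), rng.normal(4, 1, 200)])
print(em_gaussian_mixture(y))
```

Here the E step is immediate, as the text notes for mixture models: the conditional distribution of each latent group label is obtained in closed form via Bayes' rule.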
The estimation problem is even more striking in the context of Bayesian inference, as a conditional distribution must be established not only for the latent variables but also for the parameters. Once again, except in very specific circumstances, exact determination of this joint conditional distribution (of the latent variables and the parameters) is usually impossible.
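In generic notation (not taken from the book), with Y the observed data, Z the latent variables and θ the parameters, Bayesian inference targets the joint conditional distribution

```latex
\[
  p(Z, \theta \mid Y)
  \;=\;
  \frac{p(Y \mid Z, \theta)\, p(Z \mid \theta)\, p(\theta)}
       {\int p(Y \mid Z, \theta)\, p(Z \mid \theta)\, p(\theta)\, \mathrm{d}Z \, \mathrm{d}\theta},
\]
```

whose denominator is an integral over both Z and θ that is rarely available in closed form, which is why exact computation is usually impossible.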
The inference methods used in models whose conditional distribution cannot be computed exactly fall into two broad categories: sampling methods and approximation methods. Sampling methods draw a sample from the intractable distribution and use it to obtain precise estimates of all relevant quantities. This category includes Monte Carlo, Markov chain Monte Carlo (MCMC) and sequential Monte Carlo (SMC) methods. These algorithms are inherently random, and are notably used in Bayesian inference. Methods in the second category determine an approximation of the conditional distribution of the latent variables (and, in the Bayesian case, of the parameters) given the observations. This category includes variational methods and their extensions. These approaches vary in the measure of proximity between the approximate distribution and the true conditional distribution, and in the family of distributions within which the approximation is sought.
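As one example of a sampling method, here is a minimal random-walk Metropolis sketch (one MCMC variant) for a toy model in which the posterior is known only up to its normalizing constant; the model and tuning values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy model (illustrative): y_i ~ Normal(theta, 1) with prior theta ~ Normal(0, 10^2).
# The target posterior only needs to be known up to its normalizing constant.
y = rng.normal(1.5, 1.0, size=50)          # simulated observations

def log_post(theta):
    log_lik = -0.5 * np.sum((y - theta) ** 2)
    log_prior = -0.5 * theta ** 2 / 10.0 ** 2
    return log_lik + log_prior

n_draws, step = 5000, 0.5
draws = np.empty(n_draws)
theta = 0.0
for i in range(n_draws):
    proposal = theta + rng.normal(0.0, step)   # symmetric random-walk proposal
    # Accept with probability min(1, posterior ratio).
    if np.log(rng.uniform()) < log_post(proposal) - log_post(theta):
        theta = proposal
    draws[i] = theta

print(draws[1000:].mean())   # posterior mean estimate after burn-in
```

The random accept/reject step is what makes these algorithms inherently stochastic: two runs of the sampler yield different chains, but both converge in distribution to the same intractable target.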