
2.12 ESTIMATION BY MAXIMUM LIKELIHOOD


The method of least squares can be used to estimate the parameters in a linear regression model regardless of the form of the distribution of the errors ε. Least squares produces best linear unbiased estimators of β0 and β1. Other statistical procedures, such as hypothesis testing and confidence interval construction, assume that the errors are normally distributed. If the form of the distribution of the errors is known, an alternative method of parameter estimation, the method of maximum likelihood, can be used.

Consider the data (yi, xi), i = 1, 2, …, n. If we assume that the errors in the regression model are NID(0, σ2), then the observations yi in this sample are normally and independently distributed random variables with mean β0 + β1xi and variance σ2. The likelihood function is found from the joint distribution of the observations. If we consider this joint distribution with the observations given and the parameters β0, β1, and σ2 unknown constants, we have the likelihood function. For the simple linear regression model with normal errors, the likelihood function is

$$L(y_i, x_i, \beta_0, \beta_1, \sigma^2) = \left(2\pi\sigma^2\right)^{-n/2} \exp\left[-\frac{1}{2\sigma^2}\sum_{i=1}^{n}\left(y_i - \beta_0 - \beta_1 x_i\right)^2\right] \qquad (2.56)$$

The maximum-likelihood estimators are the parameter values, say $\hat{\beta}_0$, $\hat{\beta}_1$, and $\hat{\sigma}^2$, that maximize L, or equivalently, ln L. Thus,

$$\ln L = -\frac{n}{2}\ln 2\pi - \frac{n}{2}\ln \sigma^2 - \frac{1}{2\sigma^2}\sum_{i=1}^{n}\left(y_i - \beta_0 - \beta_1 x_i\right)^2 \qquad (2.57)$$

and the maximum-likelihood estimators $\hat{\beta}_0$, $\hat{\beta}_1$, and $\hat{\sigma}^2$ must satisfy

$$\left.\frac{\partial \ln L}{\partial \beta_0}\right|_{\hat{\beta}_0,\hat{\beta}_1,\hat{\sigma}^2} = \frac{1}{\hat{\sigma}^2}\sum_{i=1}^{n}\left(y_i - \hat{\beta}_0 - \hat{\beta}_1 x_i\right) = 0 \qquad (2.58a)$$

$$\left.\frac{\partial \ln L}{\partial \beta_1}\right|_{\hat{\beta}_0,\hat{\beta}_1,\hat{\sigma}^2} = \frac{1}{\hat{\sigma}^2}\sum_{i=1}^{n}\left(y_i - \hat{\beta}_0 - \hat{\beta}_1 x_i\right)x_i = 0 \qquad (2.58b)$$

and

$$\left.\frac{\partial \ln L}{\partial \sigma^2}\right|_{\hat{\beta}_0,\hat{\beta}_1,\hat{\sigma}^2} = -\frac{n}{2\hat{\sigma}^2} + \frac{1}{2\hat{\sigma}^4}\sum_{i=1}^{n}\left(y_i - \hat{\beta}_0 - \hat{\beta}_1 x_i\right)^2 = 0 \qquad (2.58c)$$
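Before turning to the closed-form solution, a minimal numerical sketch (not part of the text; the data and function names below are hypothetical) shows that the log-likelihood of Eq. (2.57) can be maximized directly with a general-purpose optimizer, which is equivalent to solving the conditions in Eq. (2.58):

```python
import numpy as np
from scipy.optimize import minimize

def negative_log_likelihood(params, x, y):
    """Negative of ln L in Eq. (2.57) for the simple linear regression model."""
    beta0, beta1, sigma2 = params
    n = len(y)
    resid = y - beta0 - beta1 * x
    return (n / 2) * np.log(2 * np.pi) + (n / 2) * np.log(sigma2) \
        + np.sum(resid ** 2) / (2 * sigma2)

# Hypothetical simulated data: y = 2 + 0.5 x + eps, eps ~ N(0, 1)
rng = np.random.default_rng(1)
x = np.linspace(0, 10, 25)
y = 2.0 + 0.5 * x + rng.normal(0.0, 1.0, size=x.size)

# Maximize ln L by minimizing its negative; sigma^2 is kept positive via a bound
result = minimize(negative_log_likelihood, x0=[0.0, 0.0, 1.0], args=(x, y),
                  bounds=[(None, None), (None, None), (1e-8, None)])
beta0_hat, beta1_hat, sigma2_hat = result.x
print(beta0_hat, beta1_hat, sigma2_hat)
```

The estimates returned for β0 and β1 should agree, to optimizer tolerance, with the closed-form values derived next.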

The solution to Eq. (2.58) gives the maximum-likelihood estimators:

$$\hat{\beta}_0 = \bar{y} - \hat{\beta}_1\bar{x} \qquad (2.59a)$$

$$\hat{\beta}_1 = \frac{\sum_{i=1}^{n} y_i\left(x_i - \bar{x}\right)}{\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2} \qquad (2.59b)$$

$$\hat{\sigma}^2 = \frac{\sum_{i=1}^{n}\left(y_i - \hat{\beta}_0 - \hat{\beta}_1 x_i\right)^2}{n} \qquad (2.59c)$$

Notice that the maximum-likelihood estimators of the intercept and slope, $\hat{\beta}_0$ and $\hat{\beta}_1$, are identical to the least-squares estimators of these parameters. Also, $\hat{\sigma}^2$ is a biased estimator of $\sigma^2$. The biased estimator is related to the unbiased estimator $MS_{Res}$ [Eq. (2.19)] by $\hat{\sigma}^2 = [(n-2)/n]\,MS_{Res}$. The bias is small if n is moderately large. Generally the unbiased estimator $MS_{Res}$ is used.
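To see this numerically (again a sketch on hypothetical simulated data, not from the text), the closed-form estimators of Eq. (2.59) can be computed directly and the biased estimator $\hat{\sigma}^2$ compared with the unbiased $MS_{Res}$:

```python
import numpy as np

# Hypothetical data generated from y = 1.5 + 0.8 x + eps, eps ~ N(0, 2^2)
rng = np.random.default_rng(2)
n = 30
x = rng.uniform(0.0, 10.0, n)
y = 1.5 + 0.8 * x + rng.normal(0.0, 2.0, n)

# Eq. (2.59b) and (2.59a): maximum-likelihood (= least-squares) slope and intercept
beta1_hat = np.sum(y * (x - x.mean())) / np.sum((x - x.mean()) ** 2)
beta0_hat = y.mean() - beta1_hat * x.mean()

# Residual sum of squares
ss_res = np.sum((y - beta0_hat - beta1_hat * x) ** 2)

sigma2_mle = ss_res / n        # Eq. (2.59c): biased ML estimator of sigma^2
ms_res = ss_res / (n - 2)      # unbiased estimator of sigma^2 (MS_Res)

print(beta0_hat, beta1_hat)
print(sigma2_mle, (n - 2) / n * ms_res)  # identical: sigma2_mle = [(n-2)/n] * MS_Res
```

The slope and intercept match the least-squares values exactly, while the two variance estimates differ only by the factor (n − 2)/n, which approaches 1 as n grows.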

In general, maximum-likelihood estimators have better statistical properties than least-squares estimators. The maximum-likelihood estimators are unbiased (including $\hat{\sigma}^2$, which is asymptotically unbiased, or unbiased as n becomes large) and have minimum variance when compared to all other unbiased estimators. They are also consistent estimators (consistency is a large-sample property indicating that the estimators differ from the true parameter value by a very small amount as n becomes large), and they are a set of sufficient statistics (this implies that the estimators contain all of the “information” in the original sample of size n). On the other hand, maximum-likelihood estimation requires more stringent statistical assumptions than the least-squares estimators. The least-squares estimators require only second-moment assumptions (assumptions about the expected value, the variances, and the covariances among the random errors). The maximum-likelihood estimators require a full distributional assumption, in this case that the random errors follow a normal distribution with the same second moments as required for the least-squares estimates. For more information on maximum-likelihood estimation in regression models, see Graybill [1961, 1976], Myers [1990], Searle [1971], and Seber [1977].
