2.1 Introduction

All of the discussion in Chapter 1 is based on the premise that the only model being considered is the one currently being fit. This is not a good data analysis strategy, for several reasons.

1 Including unnecessary predictors in the model (what is sometimes called overfitting) complicates descriptions of the process. Using such models tends to lead to poorer predictions because of the additional unnecessary noise. Further, a more complex representation of the true regression relationship is less likely to remain stable enough to be useful for future prediction than is a simpler one (see the simulated sketch after this list).

2 Omitting important effects (underfitting) reduces predictive power, biases estimates of effects for included predictors, and results in less understanding of the process being studied.

3 Violations of assumptions should be addressed, so that least squares estimation is justified.
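As a concrete illustration of the first point, here is a minimal simulated-data sketch in R; the variable names, sample sizes, and coefficients are invented for illustration and are not taken from the text. An overfitted model that carries unneeded predictors tends to predict new observations less accurately than the correctly specified simpler model.

## Illustrative simulation (all settings are invented, not from the text).
set.seed(1)
n  <- 50
x1 <- rnorm(n); x2 <- rnorm(n); x3 <- rnorm(n)    # x2 and x3 are pure noise
y  <- 1 + 2 * x1 + rnorm(n)
train <- data.frame(y, x1, x2, x3)

fit.simple <- lm(y ~ x1, data = train)            # correctly specified model
fit.over   <- lm(y ~ x1 + x2 + x3, data = train)  # overfitted model

## Compare out-of-sample prediction error on fresh data from the same process
test   <- data.frame(x1 = rnorm(1000), x2 = rnorm(1000), x3 = rnorm(1000))
test$y <- 1 + 2 * test$x1 + rnorm(1000)
mean((test$y - predict(fit.simple, test))^2)
mean((test$y - predict(fit.over,   test))^2)      # typically somewhat larger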

The last of these reasons is the subject of later chapters, while the first two are discussed in this chapter. This operation of choosing among different candidate models so as to avoid overfitting and underfitting is called model selection.

First, we discuss the uses of hypothesis testing for model selection. Various hypothesis tests address relevant model selection questions, but there are also reasons why they are not sufficient for these purposes. Part of the difficulty comes from correlations among the predictors, and the situation of high correlation among the predictors (collinearity) is a particularly challenging one.
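For instance, a partial F-test compares nested models, and a quick look at correlations among the predictors gives a first warning of collinearity. The sketch below assumes a hypothetical data frame mydata with response y and predictors x1, x2, and x3:

## Hedged sketch; the data frame and variable names are hypothetical.
fit.full    <- lm(y ~ x1 + x2 + x3, data = mydata)
fit.reduced <- lm(y ~ x1,           data = mydata)
anova(fit.reduced, fit.full)        # partial F-test of H0: slopes of x2 and x3 are both 0

cor(mydata[, c("x1", "x2", "x3")])  # large pairwise correlations hint at collinearity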

A useful way of thinking about the tradeoffs of overfitting versus underfitting is as a contrast between strength of fit and simplicity. The principle of parsimony states that a model should be as simple as possible while still accounting for the important relationships in the data. Thus, a sensible way of comparing models is using measures that explicitly reflect this tradeoff; such measures are discussed in Section 2.3.1.
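As a small sketch of such a measure, AIC is used below only as one familiar example of a criterion that penalizes complexity; the model and data names are hypothetical.

## Hedged sketch; names are hypothetical. AIC balances fit against complexity.
fit.small <- lm(y ~ x1,           data = mydata)
fit.big   <- lm(y ~ x1 + x2 + x3, data = mydata)
AIC(fit.small, fit.big)             # the smaller value indicates the preferred tradeoff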

The chapter concludes with a discussion of techniques designed to address the existence of well‐defined subgroups in the data. In this situation, it is often the case that the effect of a predictor on the target variable differs across the subgroups, and ways of building models to handle this are discussed in Section 2.4.
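One standard way to let a predictor's effect differ across subgroups is to include an interaction with a grouping factor; the sketch below uses hypothetical names and is meant only to illustrate the idea, not to anticipate the specific approach of Section 2.4.

## Hedged sketch; data frame, variables, and grouping factor are hypothetical.
mydata$group    <- factor(mydata$group)                # well-defined subgroup indicator
fit.pooled      <- lm(y ~ x1,         data = mydata)   # common intercept and slope
fit.interaction <- lm(y ~ x1 * group, data = mydata)   # separate intercepts and slopes
anova(fit.pooled, fit.interaction)                     # do the subgroups really differ?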
