
1.4 Linear Regression (LR)


LR is the simplest method of regression; it is a linear approach to modelling the relationship between a scalar response and one or more explanatory variables. Typical applications of the LR algorithm include predicting stock prices, exam scores, etc. In other words, it is a statistical regression technique used for predictive analysis, mainly applied to regression problems in ML. Assume a model with a linear relationship between the input (x) and a single output value (y), so that y can be estimated through a linear combination of the input (x). A model with a single input value is referred to as simple LR, and one with multiple input values is referred to as multiple LR. For example, consider a linear equation that combines a set of input variables (x) to produce a predicted outcome (y) for that set of inputs; both the input (x) and the output (y) are numeric. The line equation assigns one scaling factor, called a coefficient, to every input value. One extra coefficient, often known as the intercept, is also added. Learning an LR model means estimating the values of these coefficients from the available data. Various techniques can be used to fit the model; the most common is ordinary least squares (OLS) [6]. Figure 1.1 shows a plot of the data points together with the fitted LR line.

Figure 1.1 Linear regression [3].
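
The following is a minimal sketch of fitting a simple LR model with the closed-form OLS estimates described above. The data values and variable names are illustrative assumptions, not taken from the book; the sketch only demonstrates how the coefficient and intercept are estimated from data.

```python
import numpy as np

# Illustrative data (assumed for this sketch): a single input (x) and a numeric output (y).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 4.3, 6.2, 8.1, 9.9])

# Ordinary least squares (OLS) estimates for simple LR: y_hat = b0 + b1 * x,
# where b1 is the coefficient (scaling factor) and b0 is the intercept.
x_mean, y_mean = x.mean(), y.mean()
b1 = np.sum((x - x_mean) * (y - y_mean)) / np.sum((x - x_mean) ** 2)
b0 = y_mean - b1 * x_mean

# Predict outcomes for the inputs using the fitted line.
y_pred = b0 + b1 * x
print(f"intercept = {b0:.3f}, coefficient = {b1:.3f}")
```

For multiple LR, the same idea extends to one coefficient per input variable plus the intercept, typically estimated with the matrix form of OLS or a library routine.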
