
Part 3

Business Forecasting Methods

15. Projecting Financing Needs

Introduction

Forecasts of future sales and their related expenses provide the firm with the information needed to project its future financing requirements. Percentage of sales is the most widely used method for projecting a company's financing needs. This method involves estimating the various expenses, assets, and liabilities for a future period as a percentage of the sales forecast and then using these percentages, together with the projected sales, to construct pro-forma balance sheets.

How is it computed?

The basic steps involved in projecting financing needs are as follows:

1. Project the firm's sales. The sales forecast is the most important initial step; most other forecasts (budgets) follow the sales forecast.

2. Project additional variables such as expenses.

3. Estimate the level of investment in current and fixed assets required to support the projected sales.

4. Calculate the firm's financing needs.

The following example illustrates how to develop a pro-forma balance sheet and determine the amount of external financing needed.

Example

Assume that sales for 20×7 = $20, projected sales for 20×8 = $24 (both in millions), net income = 5 percent of sales, and the dividend payout ratio = 40 percent. The steps for the computations are outlined as follows, with the results shown in Figure 15.1:

Step 1: Express those balance sheet items that vary directly with sales as a percentage of sales. Any item, such as long-term debt, that does not vary directly with sales is designated "n.a.," or "not applicable."

Step 2: Multiply these percentages by the 20×8 projected sales = $24 to obtain the projected amounts as shown in the last column.

Step 3: Insert figures for long-term debt, common stock, and paid-in capital from the 20×7 balance sheet.

Step 4: Compute 20×8 retained earnings as shown in Note b.

Step 5: Sum the asset accounts, obtaining total projected assets of $7.2, and also add projected liabilities and equity to obtain $7.12, the total financing provided. Since there is a shortfall of $0.08, this amount is the "external financing needed." Any external financing needed may be raised by issuing notes payable, bonds, stocks, or any combination of these financing sources.

Figure 15.1: Pro Forma Balance Sheet in Millions of Dollars


20×8 retained earnings = 20×7 retained earnings + projected net income − cash dividends paid = $1.2 + 5%($24) - 40%[5%($24)] = $1.2 + $1.2 - $0.48 = $2.4 - $0.48 = $1.92

External financing needed = projected total assets − (projected total liabilities + projected equity) = $7.2 − ($4.9 + $2.22) = $7.2 − $7.12 = $0.08
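To make the arithmetic easy to check, here is a minimal Python sketch of the two computations above (amounts in millions; the line-item percentages of Figure 15.1 are not reproduced, so the summary totals $7.2, $4.9, and $2.22 are taken as given):

```python
# Sketch of the external-financing-needed (EFN) arithmetic from the notes
# to Figure 15.1; amounts are in millions of dollars.
sales_20x8 = 24.0
net_margin, payout = 0.05, 0.40        # net income 5% of sales, 40% paid out

# Retained earnings roll-forward (Note b).
re_20x7 = 1.2
net_income = net_margin * sales_20x8                  # $1.20
re_20x8 = re_20x7 + net_income - payout * net_income  # $1.92

# External financing needed (Note a).
projected_assets = 7.2
liabilities_plus_equity = 4.9 + 2.22                  # $7.12
efn = projected_assets - liabilities_plus_equity
print(round(re_20x8, 2), round(efn, 2))               # 1.92 0.08
```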

How is it used and applied?

Financial officers and business owners need to determine the portion of the next year’s funding requirements that has to be raised externally. By doing so, they can get a head start in arranging a least-cost financing plan.

The major advantage of the percent-of-sales method of financial forecasting is that it is simple and inexpensive to use. To obtain a more precise projection of the firm's future financing needs, however, the preparation of a cash budget is required. One important assumption behind the method is that the firm is operating at full capacity. This means that the business has no spare production capacity with which to absorb a projected increase in sales and thus requires additional investment in assets.

16. Naive Forecasting Models

Introduction

Naive forecasting models are based exclusively on historical observations of sales or of other variables, such as earnings and cash flows, being forecast. They do not attempt to explain the underlying causal relationships that produce the variables being forecast. Naive models may be classified into two groups. One group consists of simple projection models. These models require inputs of data from recent observations, but no statistical analysis is performed. The second group consists of models that, while naive, are complex enough to require a computer. Traditional methods such as classical decomposition, moving average, and exponential smoothing models are some examples. (See Sec. 17, Moving Averages, and Sec. 18, Exponential Smoothing.)

The advantages of naive forecasting models are that they are inexpensive to develop, store data for, and operate. The disadvantage is that they do not consider the possible causal relationships that underlie the forecasted variable.

How is it computed?

A simple example of a naive model is to use the actual sales of the current period as the forecast for the next period. Let F be the forecast value and At the actual value in period t. Then:

F = At

If trends are considered, then:

F = At + (At − At-1)

This model adds the latest observed absolute period-to-period change to the most recent observed level of the variable.

If it is desirable to incorporate the rate of change rather than the absolute amount, then:

F = At(At / At−1)
Example

Consider the following monthly sales data for 20×7:

Month Monthly sales of product
1 $5,504
2 5,810
3 6,100

Forecasts will be developed for the fourth month of 20×7, using the three models:

F = At = $6,100

F = At + (At − At−1) = $6,100 + ($6,100 − $5,810) = $6,100 + $290 = $6,390

F = At(At/At−1) = $6,100 × ($6,100/$5,810) = $6,100 × 1.0499 ≈ $6,404
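A short Python sketch, using the three monthly observations above, reproduces all three naive forecasts:

```python
# The three naive forecasts for month 4, from the months 1-3 actuals.
sales = [5504, 5810, 6100]
a_t, a_prev = sales[-1], sales[-2]

f1 = a_t                         # F = At
f2 = a_t + (a_t - a_prev)        # F = At + (At - At-1)
f3 = a_t * (a_t / a_prev)        # F = At(At / At-1)
print(f1, f2, round(f3))         # 6100 6390 6404
```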
How is it used and applied?

Naive models can be applied, with little need for a computer, to develop forecasts for sales, earnings, and cash flows. These models, however, must be used in conjunction with more complex naive models such as classical decomposition and exponential smoothing and with more sophisticated models such as regression analysis. The object is to pick the model (or models) that will best forecast performance.

17. Moving Averages

Introduction

A moving average is an average that is updated as new information is received. A manager employs the most recent observations to calculate an average, which is used as the forecast for the next period.

How is it computed?

For a moving average, simply take the most recent observations and calculate an average. Moving averages are updated continually as new data are available.

Example

Assume that the manager of Drake Hardware Store has the following sales data:

Month Sales (000)
April 20
May 21
June 24
July 22
August 26
September 25

Using a 5-month average, predicted sales for October are computed as follows:

October forecast = (21 + 24 + 22 + 26 + 25)/5 = 118/5 = 23.6, or $23,600
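The same computation as a minimal Python sketch:

```python
# 5-month moving average of May-September as the October forecast ($000).
sales = [20, 21, 24, 22, 26, 25]        # April..September
window = sales[-5:]                     # drop April, keep the latest five
forecast = sum(window) / len(window)
print(forecast)                         # 23.6, i.e., $23,600
```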
How is it used and applied?

Moving averages are used, for example, to project future sales. Once sales are projected, the financing needed for production and inventory may be planned. Business owners can choose the number of periods to use on the basis of the relative importance attached to old versus current data.

For example, one can compare two possibilities, a 5-month and a 3-month period. With a 5-month average, the older data receive a combined weight of ⅘ and the current observation ⅕. With a 3-month average, the older data receive a weight of ⅔, while the current observation receives a ⅓ weight. This is a special case of the exponential smoothing method, in which the smoothing constant is, in effect, the weight given to the most recent data. See Sec. 18, Exponential Smoothing.

A sales forecast can be fairly accurate if the right number of observations to be averaged is picked. In order to pick the right number, the business manager may have to experiment with different moving-average periods. Measures of forecasting accuracy, such as the mean absolute deviation (MAD), can be used to pick the optimal number of periods. See Sec. 24, Measuring Accuracy of Forecasts.

18. Exponential Smoothing

Introduction

Exponential smoothing is a popular technique for short-run business forecasting. It uses a weighted average of past data as the basis for a forecast. The procedure gives the heaviest weight to recent information and smaller weights to observations in the more distant past. The reason is that the future is more dependent on the recent past than on the distant past.

How is it computed?

The formula for exponential smoothing is

ŷt+1 = αyt + (1 − α)ŷt

or

ŷnew = αyold + (1 − α)ŷold

where ŷnew = exponentially smoothed average to be used as the forecast
yold = most recent actual data
ŷold = most recent smoothed forecast
α = smoothing constant

The higher the α, the greater is the weight given to the more recent information.

Example

The following data on sales are given for an appliance business:

Time period, t Actual sales ($1,000), yt
1 60.0
2 64.0
3 58.0
4 66.0
5 70.0
6 60.0
7 70.0
8 74.0
9 62.0
10 74.0
11 68.0
12 66.0
13 60.0
14 66.0
15 62.0

To initialize the exponential smoothing process, an initial forecast is needed. The first smoothed forecast can be:

1. The first actual observation.

2. An average of the actual data for a few periods.

For illustrative purposes, let us use a six-period average as the initial forecast ŷ7, with a smoothing constant of α = 0.40. Then

ŷ7 = (y1 + y2 + y3 + y4 + y5 + y6)/6 = (60 + 64 + 58 + 66 + 70 + 60)/6 = 63
Note that y7 = 70. Then ŷ8 is computed as follows:

ŷ8 = αy7 + (1 − α)ŷ7
= (0.40)(70) + (0.60)(63)
= 28.0 + 37.80 = 65.80

Similarly,

ŷ9 = αy8 + (1 − α)ŷ8
= (0.40)(74) + (0.60)(65.80)
= 29.60 + 39.48 = 69.08

and

ŷ10 = αy9 + (1 − α)ŷ9
= (0.40)(62) + (0.60)(69.08)
= 24.80 + 41.45 = 66.25

By using the same procedure, the values of ŷ11, ŷ12, ŷ13, ŷ14 and ŷ15 can be calculated. Table 18.1 shows a comparison between the actual sales and predicted sales using the exponential smoothing method.

Table 18.1: Comparison of Actual Sales and Predicted Sales


Because of the negative and positive differences between actual sales and predicted sales, the forecaster can use a higher or lower smoothing constant, α, in order to adjust the prediction as quickly as possible to large fluctuations in the data series. For example, if the forecast is slow in reacting to increased sales (that is, if the difference is negative), the forecaster may want to try a higher value of α. For practical purposes, the optimal α may be picked by minimizing the mean squared error (MSE), defined as:

MSE = Σ(yt − ŷt)² / (n − i)

where i = the number of observations used to determine the initial forecast

In our example, i = 6, so the mean squared error is

MSE = Σ(yt − ŷt)² / (15 − 6), summed over t = 7 through 15
The idea is to select the α that minimizes MSE, which is the average of the squared deviations between the historical sales data and the forecast values for the corresponding periods.
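A small Python sketch makes the search for the best α concrete. It seeds the forecast with the six-period average (ŷ7 = 63) and accumulates squared errors over t = 7 through 15; with α = 0.40 the result is an MSE of about 34 for these data:

```python
# Exponential smoothing with a 6-period average as the initial forecast,
# scored by MSE over the remaining periods (t = 7..15).
y = [60, 64, 58, 66, 70, 60, 70, 74, 62, 74, 68, 66, 60, 66, 62]

def smoothing_mse(y, alpha, i=6):
    forecast = sum(y[:i]) / i                 # y-hat_7 = 63
    sq_errors = 0.0
    for actual in y[i:]:
        sq_errors += (actual - forecast) ** 2
        forecast = alpha * actual + (1 - alpha) * forecast
    return sq_errors / (len(y) - i)

# Try several smoothing constants and keep the one with the lowest MSE.
for alpha in (0.2, 0.4, 0.6, 0.8):
    print(alpha, round(smoothing_mse(y, alpha), 2))
```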

Can a Computer Help?

A manager will sometimes be confronted with complex problems requiring large samples of data and trials of many different values of α for exponential smoothing. Excel has a routine for exponential smoothing.

How is it used and applied?

The exponential smoothing method is effective when there is randomness but no seasonal fluctuation in the data. The forecaster can use a higher or lower smoothing constant α in order to adjust the prediction as quickly as possible to large fluctuations in the data series. For example, if the forecast is slow in reacting to increased sales (that is, if the difference is negative), the forecaster may want to try a higher value. Finding the best α is the key to success in using this method.

The method is simple and effective, since it does not require much data other than for the variable involved. One disadvantage, however, is that it does not include industrial or economic factors such as market conditions, prices, or the effects of competitors' actions.

19. Regression Analysis

Introduction

Regression analysis is a statistical procedure for estimating mathematically the average relationship between the dependent variable and the independent variable(s). The least-squares method is widely used in regression analysis for estimating the parameter values in a regression equation. Simple regression involves one independent variable (e.g., price or advertising in a demand function), whereas multiple regression involves two or more variables (e.g., price and advertising together).

How is it computed?

We will assume a simple (linear) regression to illustrate the least-squares method, which means that we will assume the Y = a + bX relationship, where a = intercept and b = slope. The regression method includes all the observed data and attempts to find a line of best fit. To find this line, a technique called the least-squares method is used.

The Least-Squares Method

To explain the least-squares method, we define the error as the difference between the observed value and the estimated one and denote it with u. Symbolically,

u = Y - Y′

where Y = observed value of the dependent variable

Y′ = estimated value based on Y′ = a + bX

The least-squares criterion requires that the line of best fit be such that the sum of the squares of the errors (or the vertical distance in Figure 19.1 from the observed data points to the line) is a minimum, i.e.,

Minimum: Σu² = Σ(Y − Y′)² = Σ(Y − a − bX)²

Using differential calculus we obtain the following equations, called normal equations:

ΣY = na + bΣX

ΣXY = aΣX + bΣX²

Solving the equations for b and a yields

b = (nΣXY − ΣXΣY) / (nΣX² − (ΣX)²)

a = Ȳ − bX̄ = ΣY/n − b(ΣX/n)
Figure 19.1: Y and Y′


Example 1

To illustrate the computations of b and a, we will refer to the data in Table 19.1. All the sums required are computed and shown in Table 19.1.

Table 19.1: Computed Sums


From the table:

ΣX = 174; ΣY = 225; ΣXY = 3,414; ΣX² = 2,792.

X̄ = ΣX/n = 174/12 = 14.5; Ȳ = ΣY/n = 225/12 = 18.75.

Substituting these values into the formula for b first:

b = [12(3,414) − (174)(225)] / [12(2,792) − (174)²] = (40,968 − 39,150)/(33,504 − 30,276) = 1,818/3,228 = 0.5632
a = Ȳ − bX̄ = 18.75 − (0.5632)(14.5) = 18.75 − 8.1664 = 10.5836

Thus,

Y′ = 10.5836 + 0.5632 X
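As a check on the manual work, a minimal Python sketch solves for b and a from the column sums of Table 19.1:

```python
# Least-squares slope and intercept from the column sums in Table 19.1.
n, sx, sy, sxy, sxx = 12, 174, 225, 3414, 2792

b = (n * sxy - sx * sy) / (n * sxx - sx ** 2)  # 1,818 / 3,228
a = sy / n - b * (sx / n)                      # 18.75 - b(14.5)
print(round(b, 4), round(a, 4))                # 0.5632 10.5836
```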

Can a computer help?

Spreadsheet programs such as Excel include a regression routine that can be used without difficulty. In practice, you do not compute the parameter values a and b manually. Table 19.2 shows an Excel regression output that contains the statistics discussed so far. The other statistics that appear are discussed in Sec. 20, Regression Statistics.

Table 19.2: Excel regression output



(1) R-squared (r2) = .608373 = 60.84%

(2) Standard error of the estimate (Se) = 2.343622

(3) Standard error of the coefficient (Sb) = 0.142893

(4) t-value = 3.94

Note that all of the above are the same as the ones manually obtained.

How is it used and applied?

Before attempting a least-squares regression approach, it is extremely important to plot the observed data on a diagram called a scattergraph (see Figure 19.3). The reason is to make sure that a linear (straight-line) relationship exists between Y and X in the sample data. If a nonlinear relationship is detected in the sample, the assumed linear relationship, Y = a + bX, will not give a good fit.

Example 2

Assume that advertising of $10 is to be expended next year; the projected sales for the next year would be computed as follows:

Y′ = 10.5836 + 0.5632 X

= 10.5836 + 0.5632 (10)

= $16.2156

In order to obtain a good fit and achieve a high degree of accuracy, you should be familiar with statistics relating to regression such as r-squared (R2) and t-value, which are discussed later.

Figure 19.3: Scatter diagram


20. Regression Statistics

Introduction

Regression analysis is a statistical procedure for estimating mathematically the average relationship between the dependent variable (e.g., sales) and the independent variable(s) (e.g., price, advertising, or both). It uses a variety of statistics to convey the accuracy and reliability of the regression results.

How is it computed?

Regression statistics include:

1. Correlation coefficient (r) and coefficient of determination (r²)

2. Standard error of the estimate (Se)

3. Standard error of the regression coefficient (Sb) and the t statistic

1. Correlation Coefficient (r) and Coefficient of Determination (r2)

The correlation coefficient r measures the degree of correlation between Y and X. The range of values it takes on is between - 1 and + 1. More widely used, however, is the coefficient of determination, designated r2 (read as r-squared). Simply put, r2 tells the level of quality of the estimated regression equation--a measure of “goodness of fit” in the regression. Therefore, the higher the r2 , the more confidence can be placed in the estimated equation.

More specifically, the coefficient of determination represents the proportion of the total variation in Y that is explained by the regression equation. It has the range of values between 0 and 1.

Example 1

The statement "Sales is a function of advertising dollars with r² = 70 percent" can be interpreted as "70 percent of the total variation of sales is explained by the regression equation (the change in advertising), and the remaining 30 percent is accounted for by something other than advertising."

The coefficient of determination is computed as

r² = 1 − [Σ(Y − Y′)² / Σ(Y − Ȳ)²]
where Y = actual values

Y′ = estimated values

Ȳ = average (mean) value of Y

In a simple regression, however, there is a shortcut method available:

r² = [nΣXY − ΣXΣY]² / {[nΣX² − (ΣX)²][nΣY² − (ΣY)²]}
where n = number of observations

X = value of the independent variable

Example 2

To illustrate the computations of various regression statistics, use the same data used in Sec. 19, Regression Analysis. All the sums required are computed and shown below. Note that the Y2 column is added in Table 20.1 to be used for r2.

Table 20.1: Computed Sums


From this table,

ΣX = 174; ΣY = 225; ΣXY = 3,414; ΣX² = 2,792; ΣY² = 4,359
Using the shortcut method for r²,

r² = (1,818)² / {(3,228)[12(4,359) − (225)²]} = 3,305,124 / [(3,228)(1,683)] = 3,305,124/5,432,724 = 0.608373 = 60.84%
This means that about 60.84 percent of the total variation in sales is explained by advertising, and the remaining 39.16 percent is still unexplained. A relatively low r² indicates that there is a lot of room for improvement in the forecasting equation (Y′ = $10.5836 + $0.5632X). Adding another explanatory variable such as price, or a combination of price and advertising, might improve r².
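As a check, the shortcut formula can be evaluated directly from the column sums; ΣY² = 4,359 is the value of the added Y² column implied by the reported r²:

```python
# Shortcut r-squared from the column sums of Table 20.1.
n, sx, sy, sxy, sxx, syy = 12, 174, 225, 3414, 2792, 4359

num = (n * sxy - sx * sy) ** 2                    # (1,818)^2
den = (n * sxx - sx ** 2) * (n * syy - sy ** 2)   # (3,228)(1,683)
print(round(num / den, 6))                        # 0.608373 = 60.84%
```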

Note: A low r2 is an indication that the model is inadequate for explaining the y variable. The general causes for this problem are:

1. Use of a wrong functional form.

2. Poor choice of an x variable as the predictor.

3. Omission of some important variable or variables from the model.

2. Standard Error of the Estimate (Se)

The standard error of the estimate, designated Se, is defined as the standard deviation of the regression. It is computed as

Se = √[Σ(Y − Y′)² / (n − 2)] = √[(ΣY² − aΣY − bΣXY) / (n − 2)]
Statistics can be used to gain some idea of the accuracy of these predictions.

Since t = 3.94 > 2, we conclude that the b coefficient is statistically significant. As noted below, the table's critical value (cutoff value) for 10 degrees of freedom is 2.228 (from Table 8 in the Appendix).

Rule of thumb: Any t value greater than +2 or less than −2 is acceptable. The higher the t value, the greater the confidence we have in the coefficient as a predictor. Low t values are indications of low reliability of the predictive power of that coefficient.

Example 3

Returning to our example data, Se is calculated as

Se = √{[4,359 − (10.5836)(225) − (0.5632)(3,414)] / (12 − 2)} = √(54.93/10) = √5.493 ≈ 2.3436
Suppose you wish to make a prediction regarding an individual Y value--such as a prediction about the sales when an advertising expense = $10. Usually, we would like to have some objective measure of the confidence we can place in our prediction, and one such measure is a confidence (or prediction) interval constructed for Y.

Note: t is the critical value for the level of significance employed. For example, for a significance level of 0.025 in each tail (which is equivalent to a 95 percent confidence level in a two-tailed test), the critical value of t for 10 degrees of freedom is 2.228 (see Table A.2 in the Appendix). As can be seen, the confidence interval is the linear distance bounded by limits on either side of the prediction.

Example 4

If you want a 95 percent confidence interval for your prediction, the range for the prediction, given an advertising expense of $10, would be between $10.9941 and $21.4371, determined as follows. Note that from Example 2 in Sec. 19, Y′ = $16.2156.

The confidence interval is therefore established as follows:

$16.2156 ± (2.228)(2.3436)

= $16.2156 ± 5.2215

which means the range for the prediction, given an advertising expense of $10, is $10.9941 to $21.4371. Note that $10.9941 = $16.2156 − 5.2215 and $21.4371 = $16.2156 + 5.2215.
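The interval arithmetic, as a minimal Python sketch of the simplified Y′ ± t·Se form used above:

```python
# Simplified 95% prediction interval around Y' at X = 10.
t_table, se = 2.228, 2.3436           # t for 10 d.f.; Se from Example 3
y_hat = 10.5836 + 0.5632 * 10         # 16.2156
half_width = t_table * se             # about 5.2216
print(round(y_hat - half_width, 4), round(y_hat + half_width, 4))
# about 10.994 and 21.4372; the text, rounding the half-width to 5.2215,
# reports 10.9941 and 21.4371.
```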

3. Standard Error of the Regression Coefficient (Sb) and the t Statistic

The standard error of the regression coefficient, designated Sb, and the t statistic are closely related. Sb is calculated as:

Sb = Se / √Σ(X − X̄)²
or, in short-cut form,

Sb = Se / √[ΣX² − (ΣX)²/n]
Sb gives an estimate of the range where the true coefficient will “actually” fall.

The t statistic (or t value) is a measure of the statistical significance of an independent variable X in explaining the dependent variable Y. It is determined by dividing the estimated regression coefficient b by its standard error Sb. It is then compared with the table t value (see Table 7 in the Appendix). Thus, the t statistic measures how many standard errors the coefficient is away from zero. Low t values are indicators of low reliability of that coefficient.

Example 5

The Sb for our example is:

Sb = 2.3436 / √[2,792 − (174)²/12] = 2.3436/√269 = 2.3436/16.4012 = 0.1429
Since t = b/Sb = 0.5632/0.1429 = 3.94 > 2, the conclusion is that the b coefficient is statistically significant.
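And a corresponding sketch for Sb and the t value:

```python
# Standard error of the regression coefficient and its t value.
from math import sqrt

n, sx, sxx = 12, 174, 2792
se, b = 2.3436, 0.5632

sb = se / sqrt(sxx - sx ** 2 / n)   # 2.3436 / sqrt(269)
t = b / sb
print(round(sb, 4), round(t, 2))    # 0.1429 3.94
```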

How is it used and applied?

The least-squares method is used to estimate both simple and multiple regressions, although in reality managers will confront multiple regression more often than simple regression. Computer software is used to estimate the b's. A spreadsheet program such as Excel can be used to develop a model and estimate most of the regression statistics discussed thus far. Table 20.2 shows the relevant statistics.

Regression analysis is a powerful statistical technique that is widely used by businesspersons and economists. In order to obtain a good fit and to achieve a high degree of accuracy, analysts must be familiar with statistics relating to regression, such as r2 and the t value, and be able to make further tests that are unique to multiple regression.

See also Sec. 19, Regression Analysis; Sec. 21, Simple Regression.

Table 20.2: Excel regression output


(1) R-squared (r²) = .608373 = 60.84%

(2) Standard error of the estimate (Se) = 2.343622

(3) Standard error of the coefficient (Sb) = 0.142893

(4) t-value = 3.94

Note that all of the above are the same as the values obtained manually. Note the following:

(1) The t statistic is more relevant to multiple regression, which has more than one b.

(2) r² tells you how good the forest (overall fit) is, while the t statistic tells you how good an individual tree (an independent variable) is.

In summary, the table t value, based on the degrees of freedom and a level of significance, is used:

1. To set the prediction range—upper and lower limits—for the predicted value of the dependent variable.

2. To set the confidence range for regression coefficients.

3. As a cutoff value for the t test.

21. Simple Regression

Introduction

Simple regression is a type of regression analysis that involves one independent (explanatory) variable.

How is it computed?

Simple regression takes the form:

Y = a + bX

where Y = dependent variable

X = independent variable

a = constant

b = slope

The least-squares estimation method is typically used to estimate the parameter values a and b.

Example

Assume that data on DVD sales and advertising expenditures have been collected over the past seven periods. The linear regression equation can be estimated using the least-squares method. For example, the sales/advertising regression for DVDs is estimated to be:

DVD sales = Y = 19.88 + 4.72X

r2 = 0.7630 = 76.30%

How is it used and applied?

Other applications of simple regression are:

• Total manufacturing costs are explained by a single activity variable (such as production volume or machine hours), i.e., TC = a + bQ.

• A security's return is a function of the return on a market portfolio (such as Standard & Poor's 500), i.e., rj = a + βrm, where β = beta, a measure of uncontrollable risk.

• Consumption is a function of disposable income, i.e., C = a + bYd, where b = marginal propensity to consume.

• Demand is a function of price, i.e., D = a − bP.

• Average time taken is a function of cumulative production, i.e., Y = aX⁻ᵇ, where b represents the learning rate in the learning-curve phenomenon.

• Trend analysis attempts to detect a growing or declining trend in time-series data, i.e., Y = a + bt, where t = time.

22. Trend Equation

Introduction

Trends are the general upward or downward movements of the average over time. These movements may require many years of data to determine or describe them. The basic forces underlying trends include technological advances, productivity changes, inflation, and population change.

How is it computed?

The trend equation is a common method for forecasting sales or earnings. It involves a regression whereby a trend line is fitted to a time series of data.

The linear trend line equation can be shown as

Y = a + bX

The formulas for the coefficients a and b are essentially the same as for simple regression. They are estimated using the least-squares method, which was discussed in Sec. 19, Regression Analysis.

However, for regression purposes, the time periods can be numbered so that ΣX = 0. When there is an odd number of periods, the period in the middle is assigned a value of 0. If there is an even number, then −1 and +1 are assigned to the two periods in the middle, so that again ΣX = 0.

With ΣX = 0, the formulas for b and a reduce to

b = ΣXY / ΣX²

a = ΣY / n
Example 1

This example demonstrates the numbering of periods in cases in which an odd number and an even number of periods occur.

Case 1 (odd number): for five periods, assign t = −2, −1, 0, +1, +2.

Case 2 (even number): for six periods, assign t = −5, −3, −1, +1, +3, +5.
Example 2

Year Sales (in millions)
20×1 $10
20×2 12
20×3 13
20×4 16
20×5 17

Since the company has five years of data, which is an odd number, the year in the middle is assigned a zero value, so t = −2, −1, 0, +1, +2.

b = ΣtY/Σt² = [(−2)(10) + (−1)(12) + (0)(13) + (1)(16) + (2)(17)] / (4 + 1 + 0 + 1 + 4) = 18/10 = 1.8

a = ΣY/n = 68/5 = 13.6
Therefore, the estimated trend equation is

Y′ = $13.6 + $1.8 t

To project 20×6 sales, we assign +3 to the t value for the year 20×6.

Y′ = $13.6 + $1.8(3)

= $13.6 + $5.4 = $19
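A minimal Python sketch of the centered-t fit and the 20×6 projection:

```python
# Trend fit with the middle year coded t = 0, so Sum(t) = 0 and
# b = Sum(tY)/Sum(t^2), a = Sum(Y)/n.
t_vals = [-2, -1, 0, 1, 2]            # 20x1..20x5
sales = [10, 12, 13, 16, 17]          # $ millions

b = sum(t * y for t, y in zip(t_vals, sales)) / sum(t * t for t in t_vals)
a = sum(sales) / len(sales)
print(a, b)                           # 13.6 1.8
print(round(a + b * 3, 1))            # 19.0, the 20x6 forecast (t = +3)
```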

How is it used and applied?

Managers use the trend equation for forecasting purposes, such as projecting future revenues or costs. They should use the trend equation, however, only if the time-series data reflect a gradual shifting or growth pattern over time.

See also Sec. 23, Decomposition of Time Series.

23. Decomposition of Time Series

Introduction

When sales exhibit seasonal or cyclical fluctuation, we use a forecasting method called classical decomposition to deal with the seasonal, trend, and cyclical components together.

How is it computed?

We assume that a time series (Yt) combines four components: trend (T), cyclical (C), seasonal (S), and random (R). The model is of a multiplicative type, i.e.,

Yt = T × C × S × R

The classical decomposition method is illustrated step by step, by working with the quarterly sales data. The approach basically requires the following four steps:

1. Determine seasonal indexes, using a four-quarter moving average.

2. Deseasonalize the data.

3. Develop the linear least-squares equation in order to identify the trend component of the forecast.

4. Forecast the sales for each of the four quarters of the coming year.

Example

We illustrate the classical decomposition approach by working with the quarterly sales data presented in Table 23.1 and Figure 23.1. These data show DVD sales (in thousands of units) for a particular maker over the past four years.

We begin our analysis by showing how to identify the seasonal component of the time series. Looking at Figure 23.1, we can easily see a seasonal pattern of DVD sales. Specifically, we observe that sales are lower in the second quarter of each year, followed by higher sales in quarters 3 and 4. The computational procedure used to eliminate the seasonal component is explained below, step by step.

Step 1. We use a moving average to measure the combined trend-cyclical (TC) component of the time series. This way we eliminate the seasonal and random components, S and R.

More specifically, Step 1 involves the following sequence of steps:

Table 23.1: Quarterly Sales Data for DVDs over the Past 4 Years

Year Quarter Sales
1 1 5.8
1 2 5.1
1 3 7.0
1 4 7.5
2 1 6.8
2 2 6.2
2 3 7.8
2 4 8.4
3 1 7.0
3 2 6.6
3 3 8.5
3 4 8.8
4 1 7.3
4 2 6.9
4 3 9.0
4 4 9.4

Figure 23.1: Quarterly DVD sales time series


a) Calculate the 4-quarter moving average for the time series, as discussed above. However, the moving-average values computed do not correspond directly to the original quarters of the time series.

b) We resolve this difficulty by using the midpoints between successive moving-average values. For example, since 6.35 corresponds to the first half of quarter 3 and 6.6 corresponds to the last half of quarter 3, we use (6.35 + 6.6)/2 = 6.475 as the moving-average value of quarter 3. Similarly, we associate (6.6 + 6.875)/2 = 6.7375 with quarter 4. A complete summary of the moving-average calculations is shown in Table 23.2.

Table 23.2: Moving Average Calculations for the DVD Sales Time Series



c) Next, we calculate the ratio of the actual value to the moving-average value for each quarter in the time series having a 4-quarter moving-average entry. This ratio in effect represents the seasonal-random component, SR = Y/TC. The ratios calculated this way appear in Table 23.3.

Table 23.3: Seasonal Random Factors for the Series


d) Arrange the ratios by quarter and then calculate the average ratio by quarter in order to eliminate the random influence.

For example, for quarter 1

(0.975 + 0.929 + 0.920)/3 = 0.941

e) The final step, shown below, adjusts the average ratio slightly (for example, for quarter 1, 0.941 becomes 0.940), yielding the seasonal index, as shown in Table 23.4.

Table 23.4: Seasonal Component Calculations


Step 2: After obtaining the seasonal index, we must first remove the effect of season from the original time series. This process is referred to as deseasonalizing the time series. For this, we must divide the original series by the seasonal index for that quarter. This is shown in Table 23.5 and graphed in Fig. 23.2.

Step 3: Looking at the graph in Figure 23.2, we see that the time series seems to have an upward linear trend. To identify this trend, we develop the least-squares trend equation. This procedure is also shown in Table 23.5.

Figure 23.2: Quarterly DVD sales time series − original versus deseasonalized


Table 23.5: Deseasonalized Data



which means y = 6.1147 + 0.1469t for the forecast periods:

t = 17, 18, 19, 20

Table 23.6: Quarter-To-Quarter Sales Forecasts for Year 5


Note: (a) y = 6.1147 + 0.1469 t = 6.1147 + 0.1469 (17) = 8.6128

Step 4: Develop the forecast using the trend equation and adjust the forecasts to account for the effect of season. The quarterly forecast, as shown in Table 23.6, can be obtained by multiplying the trend forecast by the seasonal factor.
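The four steps can be sketched end to end in Python. The snippet below mirrors the procedure in the text (centered 4-quarter moving average, quarterly ratio averages normalized to a mean of 1, a least-squares trend on the deseasonalized series, and re-seasonalized forecasts for t = 17 through 20); because the text rounds its seasonal indexes, the printed forecasts may differ from Table 23.6 in the last decimal:

```python
# Classical multiplicative decomposition of the quarterly DVD series.
sales = [5.8, 5.1, 7.0, 7.5, 6.8, 6.2, 7.8, 8.4,
         7.0, 6.6, 8.5, 8.8, 7.3, 6.9, 9.0, 9.4]

# Step 1: centered 4-quarter moving average (TC) and the SR = Y/TC ratios.
ma = [sum(sales[i:i + 4]) / 4 for i in range(len(sales) - 3)]
tc = [(ma[i] + ma[i + 1]) / 2 for i in range(len(ma) - 1)]  # quarters 3..14
sr = [sales[i + 2] / tc[i] for i in range(len(tc))]

# Average the ratios by quarter, then normalize so the indexes average 1.
groups = {q: [] for q in range(4)}
for i, ratio in enumerate(sr):
    groups[(i + 2) % 4].append(ratio)
raw = [sum(groups[q]) / len(groups[q]) for q in range(4)]
season = [r * 4 / sum(raw) for r in raw]        # indexes for quarters 1..4

# Step 2: deseasonalize.  Step 3: least-squares trend on t = 1..16.
d = [sales[i] / season[i % 4] for i in range(len(sales))]
n = len(d)
st, stt = sum(range(1, n + 1)), sum(t * t for t in range(1, n + 1))
sy, sty = sum(d), sum((i + 1) * y for i, y in enumerate(d))
b = (n * sty - st * sy) / (n * stt - st ** 2)
a = sy / n - b * st / n          # should land close to y = 6.1147 + 0.1469t

# Step 4: trend forecasts for year 5, multiplied back by the seasonal index.
for t in range(17, 21):
    print(t, round((a + b * t) * season[(t - 1) % 4], 2))
```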

How is it used and applied?

The classical decomposition model is a time-series model used for forecasting. This means that the method can be used only to fit time-series data, whether monthly, quarterly, or annual. The types of time-series data a company deals with include earnings, cash flows, market share, and costs. As long as the time series displays the patterns of seasonality and cyclicality, the model should be very effective in projecting the future variable.

24. Measuring Accuracy of Forecasts

Introduction

The performance of a forecast should be checked against its own record or against that of other forecasts. There are various statistical measures that can be used to measure performance of the model.

How is it computed?

The performance is measured in terms of forecasting error, where error is defined as the difference between a predicted value and the actual result:

Error (e) = Actual (A) − Forecast (F)

There are three common measures for summarizing historical errors.

1. Mean absolute deviation (MAD) is the mean, or average, of the sum of the errors for a given data set taken without regard to sign. The formula for calculating MAD is:

MAD = Σ|e| / n

2. Mean squared error (MSE) is a measure of accuracy computed by squaring the individual errors for each item and then finding the average value of the sum of those squares. MSE gives greater weight to large errors than to small errors, since the errors are squared before being summed. The formula for calculating MSE is:

MSE = Σe² / (n − 1)

3. Mean absolute percentage error (MAPE): Sometimes it is more useful to compute the forecasting errors in percentages rather than in amounts. The MAPE is calculated by finding the absolute error in each period, dividing this by the actual value for that period, and then averaging these absolute percentage errors:

MAPE = Σ(|e|/A) / n
The following example illustrates the computation of MAD, MSE, and MAPE.

Example 1

Sales data of a microwave oven manufacturer and calculation of relevant errors are given in Table 24.1.

Table 24.1: Calculation of Errors


Using the figures,

MAD = Σ|e| / n = 22/8 = 2.75
MSE = Σe² / (n − 1) = 76/7 = 10.86
MAPE = Σ(|e|/A) / n = 0.0524/8 = 0.0066
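The three measures are easy to script; a sketch, assuming the actual and forecast columns of Table 24.1 are available as lists:

```python
# MAD, MSE (with the n - 1 denominator used above), and MAPE for paired
# actual/forecast series.
def error_measures(actual, forecast):
    e = [a - f for a, f in zip(actual, forecast)]
    n = len(e)
    mad = sum(abs(x) for x in e) / n
    mse = sum(x * x for x in e) / (n - 1)
    mape = sum(abs(x) / a for x, a in zip(e, actual)) / n
    return mad, mse, mape

# With the Table 24.1 figures this returns 2.75, 10.86, and 0.0066.
```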

One way these measures are used is to evaluate the forecasting ability of alternative forecasting methods. For example, using either MAD or MSE, a forecaster could compare the results of exponential smoothing (see Sec. 18) with those of a competing method and select the one that performed best in terms of the lowest MAD or MSE for a given set of data. These measures can also help in selecting the best initial forecast value for exponential smoothing.

Figure 24.1: Monitoring forecast errors


How is it used and applied?

It is important to monitor forecast errors to ensure that the forecast is performing well. If the model is performing poorly based on some criterion, the forecaster might reconsider the use of the existing model or switch to another forecasting model or technique. Forecasting control can be accomplished by comparing forecasting errors to predetermined values, or limits. Errors that fall within the limits would be judged acceptable, while errors outside the limits would signal that corrective action is desirable (see Figure 24.1).

Monitoring forecasts

Forecasts can be monitored using either tracking signals or control charts.

Tracking Signals

A tracking signal is based on the ratio of cumulative forecast error to the corresponding value of MAD.

Tracking signal = Σ(A - F) / MAD

The resulting tracking signal values are compared to predetermined limits. These are based on experience and judgment and often range from plus or minus 3 to plus or minus 8. Values within the limits suggest that the forecast is performing adequately. By the same token, when the signal goes beyond this range, corrective action is appropriate.

Example 2

Returning to Example 1, the deviations and cumulative deviation have already been computed:

MAD= Σ |A - F| / n = 22 / 8 = 2.75

Tracking signal = Σ (A - F) / MAD = -2 / 2.75 = -0.73

The tracking signal of −0.73 is well within the limits (±3 to ±8), so it would not suggest any corrective action at this time.
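In code, the tracking signal is one line once the error list is at hand; a sketch:

```python
# Tracking signal: cumulative error divided by MAD.
def tracking_signal(errors):
    mad = sum(abs(e) for e in errors) / len(errors)
    return sum(errors) / mad

# For Example 1 the errors sum to -2 and MAD = 2.75, giving about -0.73.
```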

Note: After an initial value of MAD has been computed, the estimate of the MAD can be continually updated using exponential smoothing.

MADt = α|A − F| + (1 − α)MADt−1

Control Charts

The control chart approach involves setting upper and lower limits for individual forecasting errors instead of cumulative errors. The limits are multiples of the estimated standard deviation of forecast, Sf, which is the square root of MSE. Frequently, control limits are set at 2 or 3 standard deviations.

± 2(or 3) Sf

Note: Plot the errors and see if all errors are within the limits, so that the forecaster can visualize the process and determine if the method being used is in control.

Example 3

For the sales data in Table 24.2, using the naive forecast, we will determine if the forecast is in control. For illustrative purposes, we will use 2 sigma control limits.

Table 24.2: Error Calculations


First, compute the standard deviation of forecast errors:

Sf = √[Σe² / (n − 1)] = √MSE = 7.64
Two-sigma limits are then ±2(7.64), i.e., −15.28 to +15.28.

Note that the forecast error for year 3 is below the lower bound, so the forecast is not in control (See Figure 24.2). The use of other methods such as moving average, exponential smoothing, or regression might produce a better forecast.
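A sketch of the control-chart test, which flags any individual error outside the ±2Sf band:

```python
# 2-sigma control limits for individual forecast errors.
from math import sqrt

def out_of_control(errors, k=2):
    sf = sqrt(sum(e * e for e in errors) / (len(errors) - 1))  # sqrt(MSE)
    limit = k * sf                  # 2(7.64) = 15.28 for the Table 24.2 data
    return [e for e in errors if abs(e) > limit]   # offending errors
```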

Note: A system of monitoring forecasts needs to be developed. The computer may be programmed to print a report showing the past history when the tracking signal "trips" a limit. For example, when a type of exponential smoothing is used, the system may try a different value of α (so the forecast will be more responsive) and then continue forecasting.

Figure 24.2: Control charts for forecasting errors


25. Cost of Prediction Errors

Introduction

There is always a cost involved in a failure to predict a certain variable accurately. The cost of prediction errors associated with sales, expenses, and purchases can be significant.

How is it computed?

The cost of the prediction error is basically the contribution or profit lost because of an inaccurate prediction. It can be measured in terms of lost sales, disgruntled customers, or idle machines.

Example

Assume that a retail store has been selling a toy doll that costs $0.60 for $1.00 each. The fixed cost is $300. The store cannot return unsold dolls. It has predicted sales of 2,000 dolls. However, unforeseen competition has reduced sales to 1,500 dolls. The cost of its prediction error, that is, of its failure to predict demand accurately, is calculated as follows:

1. Initial predicted sales = 2,000 dolls

Optimal decision: purchase 2,000 dolls

Expected net income = $500 [(2,000 dolls × $0.40 contribution) − $300 fixed cost]

2. Alternative parameter value = 1,500 dolls

Optimal decision: purchase 1,500 dolls

Expected net income = $300 [(1,500 dolls × $0.40 contribution) − $300 fixed cost]

3. Results of original decision under alternative parameter value. Expected net income:

Revenue (1,500 dolls × $1.00) − cost of dolls (2,000 dolls × $0.60) − $300 fixed cost = $1,500 − $1,200 − $300 = $0

4. Cost of prediction error = (2) − (3) = $300
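The same three-step calculation as a minimal Python sketch:

```python
# Cost of the prediction error for the toy-doll example.
price, unit_cost, fixed = 1.00, 0.60, 300
contribution = price - unit_cost                    # $0.40 per doll
predicted, actual = 2000, 1500

best_income = actual * contribution - fixed         # buy 1,500 dolls: $300
# Original decision (buy 2,000) evaluated at the actual demand of 1,500:
realized = actual * price - predicted * unit_cost - fixed   # about $0
print(round(best_income - realized, 2))             # 300.0, the error cost
```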

How is it used and applied?

It is important to determine the cost of the prediction error in order to minimize its potential detrimental effect on future profitability. The cost of the prediction error can be substantial, depending on the circumstances. For example, failure to make an accurate projection of sales could result in poor production planning, purchasing too much or too little labor, and so on, thereby causing potentially huge financial losses.

Business people need to keep track of past prediction records to ensure that (1) future costs can be minimized and (2) better forecasting methods can be developed.

26. χ² (Chi-Square Test)

Introduction

The χ² (chi-square) test is a statistical test of the significance of a difference between classifications or sub-classifications. It is applied to sample data to test whether two qualitative population variables are independent.

How is the test performed?

The χ² test involves three steps.

Step 1: Calculate the χ² statistic, which is defined as:

χ² = Σ[(f0 − fe)² / fe]

where f0 = individual observed frequency of each class

fe = individual expected frequency of each class

Step 2. Find the table value at a given level of significance (see Table 6 in the Appendix).

Step 3. If the calculated value is greater than the table value, we reject the null hypothesis, which means that the two classifications are associated.

Example

Consider the following survey of 844 restaurants, cross-classifying menu type and the presence of an outdoor cafe:

                     À la carte   Prix fixe   Total
With outdoor cafe        506         140       646
Without outdoor cafe     129          69       198
Total                    635         209       844
The null hypothesis is: The menu and the indoor/outdoor cafes are independent.

In order to calculate χ2, we need to construct an expected value table on the basis of the assumption that menu and indoor/outdoor cafes are independent of each other. If no association exists, it is to be expected that the proportion of à la carte and prix fixe restaurants with outdoor cafes or tables will be the same as that without outdoor cafes. First, we compute the expected frequencies based on the premise of independence:

635/844 = 0.7524 and 209/844 = 0.2476

Now, we can compute the expected values from the proportions of totals as follows:

0.7524 × 646 = 486

0.2476 × 646 = 160

0.7524 × 198 = 149

0.2476 × 198 = 49

These expected values give the following table:

                     À la carte   Prix fixe   Total
With outdoor cafe        486         160       646
Without outdoor cafe     149          49       198
Total                    635         209       844
Step 1. Calculate χ². We are interested in how far the observed table differs from the expected table:

χ² = (506 − 486)²/486 + (140 − 160)²/160 + (129 − 149)²/149 + (69 − 49)²/49

= 0.823 + 2.500 + 2.685 + 8.163 = 14.171

Step 2. The χ² value at the 0.05 level of significance with one degree of freedom (from Table 6 in the Appendix) is 3.841. The degrees of freedom are calculated as (no. of rows − 1) × (no. of columns − 1) = (2 − 1) × (2 − 1) = 1.

Step 3. As shown in Fig. 26.1, since the calculated value is greater than the table value (14.171 > 3.841), we reject the null hypothesis, which means that the menu is associated with the outdoor/indoor restaurant setup.
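The whole test takes only a few lines of Python. Expected frequencies are row total × column total / grand total; note that the unrounded expected values (486.06, 159.94, 148.94, 49.06) give χ² ≈ 14.1, while the rounded values used in the text give 14.171:

```python
# Chi-square statistic for the 2x2 menu / outdoor-cafe table.
observed = [[506, 140],   # with outdoor cafe:    a la carte, prix fixe
            [129,  69]]   # without outdoor cafe: a la carte, prix fixe

rows = [sum(r) for r in observed]             # 646, 198
cols = [sum(c) for c in zip(*observed)]       # 635, 209
total = sum(rows)                             # 844

chi2 = sum((observed[i][j] - rows[i] * cols[j] / total) ** 2
           / (rows[i] * cols[j] / total)
           for i in range(2) for j in range(2))
print(round(chi2, 2))    # about 14.1; compare with the 3.841 cutoff
```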

Figure 26.1: Chi-square test and rejection area.


How is it used and applied?

The chi-square test has many applications in business. It is a statistical test of independence (or association), used to determine whether membership in the categories of one variable differs as a function of membership in the categories of a second variable. It is important to note, however, that the test has limitations: (1) the sample must be large enough that the expected frequency in each cell is adequate (rule of thumb: at least 5); and (2) the test does not say anything about the direction of an association.

Managers need to know whether the differences they observe among several sample proportions are significant or due only to chance. For example, marketing managers may be concerned that their brand's share is unevenly distributed throughout the country. They can conduct a survey in which the country is divided into a specific number of geographic regions and see whether consumers' decisions to purchase the company's brand have anything to do with geographic location. As another example, a financial manager might be interested in differences in capital structure across firm sizes in a certain industry. To see whether firm size has anything to do with capital structure, he or she can survey a group of firms with assets of different amounts, divide them into groups, and classify each firm according to predetermined debt/equity ratio groups.
