Bayesian Risk Management, by Matt Sekerke

Chapter 1
Models for Discontinuous Markets
Time-Invariant Models and Crisis


The characteristics enumerated above do not exhaust all dimensions of model risk, however. Even if a model is correctly specified and parameterized inasmuch as it produces reliable forecasts for currently observed data, the possibility remains that the model may fail to produce reliable forecasts in the future.

Two assumptions are regularly made about time series as a point of departure for their statistical modeling:

1. To assume that the joint distribution of observations in a time series depends not on their absolute position in the series but only on their relative positions is to assume that the time series is stationary.

2. If sample moments (time averages) taken from a time series converge in probability to the moments of the data-generating process, then the time series is ergodic.

Time series exhibiting both properties are said to be ergodic stationary. However, I find the term time-invariant more convenient. For financial time series, time-invariance implies that the means and covariances of a set of asset returns will be the same for any T observations of those returns, up to sampling error. In other words, no matter when we look at the data, we should come to the same conclusion about the joint distribution of the data, and converge to the same result as T becomes large.
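
In more formal terms (a compact restatement of the two properties above and of time-invariance, in standard notation rather than anything drawn from the original text): a series $\{y_t\}$ is strictly stationary if, for any indices $t_1, \dots, t_k$ and any shift $h$,

$$ (y_{t_1}, \dots, y_{t_k}) \overset{d}{=} (y_{t_1+h}, \dots, y_{t_k+h}), $$

and it is ergodic (for the mean) if time averages converge to population moments,

$$ \frac{1}{T}\sum_{t=1}^{T} y_t \;\xrightarrow{p}\; \mathbb{E}[y_t] \quad \text{as } T \to \infty. $$

For a vector of asset returns $r_t$, time-invariance in the sense used here means that $\mathbb{E}[r_t] = \mu$ and $\operatorname{Cov}(r_t, r_{t+h}) = \Gamma(h)$ depend only on the lag $h$, never on the date $t$, so any window of $T$ observations estimates the same quantities up to sampling error.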

Standard statistical modeling practice and classical time series analysis proceed from the underlying assumption that time series are time-invariant, or can be made time-invariant using simple transformations like detrending, differencing, or discovering a cointegrating vector (Hamilton 1994, pp. 435–450, 571). Time series models strive for time-invariance because reliable forecasts can be made for time-invariant processes. Whenever we estimate risk measures from data, we expect those measures will be useful as forecasts: Risk only exists in the future.
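
As a concrete illustration of those simple transformations (a minimal sketch of my own, not from the book; the simulated series are invented for the purpose), each one maps a series that is not time-invariant into one that at least plausibly is:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1_000
t = np.arange(T)

# Detrending: remove a fitted deterministic trend from a trend-stationary series.
trend_series = 0.05 * t + rng.normal(size=T)
detrended = trend_series - np.polyval(np.polyfit(t, trend_series, 1), t)

# Differencing: a random walk is nonstationary, but its first differences are stationary.
random_walk = np.cumsum(rng.normal(size=T))
differenced = np.diff(random_walk)

# Cointegration: two random walks share a common stochastic trend, so a particular
# linear combination (the cointegrating vector, here (1, -2) by construction) is stationary.
common = np.cumsum(rng.normal(size=T))
x = common + rng.normal(scale=0.5, size=T)
y = 2.0 * common + rng.normal(scale=0.5, size=T)
spread = y - 2.0 * x
```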

However, positing time-invariance for the sake of forecasting is not the same as observing time-invariance. Forecasts from time-invariant models break down because time series prove themselves not to be time-invariant. When the time-invariance properties desired in a statistical model are not found in empirical reality, unconditional time series models are no longer a possibility: Model estimates must be conditioned on recent history in order to supply reasonable forecasts, greatly foreshortening the horizon over which data can be brought to bear in a relevant way to develop such estimates.

In this book, I will pursue the hypothesis that the greatest obstacle to the progress of quantitative risk management is the assumption of time-invariance that underlies the naïve application of statistical and financial models to financial market data. A corollary of this hypothesis is that extreme observations seen in risk models are not extraordinarily unlucky realizations drawn from the extreme tail of an unconditional distribution describing the universe of possible outcomes. Instead, extreme observations are manifestations of inflexible risk models that have failed to adapt to shifts in the market data. The quest for models that are true for all time and for all eventualities actually frustrates the goal of anticipating the range of likely adverse outcomes within practical forecasting horizons.

Ergodic Stationarity in Classical Time Series Analysis

To assume a financial time series is ergodic stationary is to assume that a fixed stochastic process is generating the data. This data-generating process is a functional form combining some kind of stochastic disturbance summarized in a parametric probability distribution, with other parameters known in advance of the financial time series data being realized. The assumption of stationarity therefore implies that if we know the right functional form and the values of the parameters, we will have exhausted the possible range of outcomes for the target time series. Different realizations of the target time series are then just draws from the joint distribution of the conditioning data and the stochastic disturbance. This is why a sample drawn from any segment of the time series converges to the same result in an ergodic stationary time series. While we cannot predict where a stationary time series will go tomorrow, we can narrow down the range of possible outcomes and make statements about the relative probability of different outcomes. In particular, we can make statements about the probabilities of extreme outcomes.
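
A quick simulation makes the point (my sketch, not the author's; the AR(1) process and its parameters are chosen only for illustration): if the data really are generated by one fixed ergodic stationary process, sample moments computed from different segments of the series agree with each other, and with the process's true moments, up to sampling error.

```python
import numpy as np

rng = np.random.default_rng(1)
phi, sigma, T = 0.5, 1.0, 100_000

# Simulate a stationary AR(1): y_t = phi * y_{t-1} + eps_t, with eps_t ~ N(0, sigma^2).
eps = rng.normal(scale=sigma, size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi * y[t - 1] + eps[t]

first_half, second_half = y[: T // 2], y[T // 2:]
print(first_half.mean(), second_half.mean())  # both close to the true mean, 0
print(first_half.var(), second_half.var())    # both close to sigma**2 / (1 - phi**2) = 4/3
```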

Put differently, when a statistical model is specified, stationarity is introduced as an auxiliary hypothesis about the data that allows the protocols of statistical sampling to be applied when estimating the model. Stationarity implies that parameters are constant and that further observations of the data improve their estimates. Sampling-based estimation is so widely accepted and commonplace that the extra hypothesis of stationarity has dropped out of view, almost beyond criticism. Consciously or unconsciously, the hypothesis of stationarity forms a basic part of a risk manager's worldview – if one model fails, there must be another encompassing model that would capture the anomaly; some additional complication must make it possible to see what we did not see in the past.

Yet stationarity remains an assumption, and it is important to understand its function as the glue that holds together classical time series analysis. The goal in classical time series econometrics is to estimate parameters and test hypotheses about them. Assuming stationarity ensures that the estimated parameter values converge to their “correct” values as more data are observed, and tests of hypotheses about parameters are valid.

Both outcomes depend on the law of large numbers, and thus they both depend on the belief that when we observe new data, those data are sampled from the same process that generated previous data. In other words, only if we assume we are looking at a unitary underlying phenomenon can we apply the law of large numbers to ensure the validity of our estimates and hypothesis tests. Consider, for example, the discussion of ‘Fundamental Concepts in Time-Series Analysis’ in the textbook by Fumio Hayashi (2000, pp. 97–98) concerning the ‘Need for Ergodic Stationarity’:

The fundamental problem in time-series analysis is that we can observe the realization of the process only once. For example, the sample on the U.S. annual inflation rate for the period from 1946 to 1995 is a string of 50 particular numbers, which is just one possible outcome of the underlying stochastic process for the inflation rate; if history took a different course, we would have obtained a different sample…

Of course, it is not feasible to observe many different alternative histories. But if the distribution of the inflation rate remains unchanged [my emphasis] (this property will be referred to as stationarity), the particular string of 50 numbers we do observe can be viewed as 50 different values from the same distribution.

The discussion is concluded with a statement of the ergodic theorem, which extends the law of large numbers to the domain of time series (pp. 101–102).
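
In its standard form (my paraphrase, not Hayashi's exact wording), the theorem says that for a stationary and ergodic process $\{y_t\}$ with $\mathbb{E}\lvert y_t \rvert < \infty$,

$$ \frac{1}{T}\sum_{t=1}^{T} y_t \;\xrightarrow{\text{a.s.}}\; \mathbb{E}[y_t], $$

so time averages taken from a single realization stand in for the averages across alternative histories that we can never observe.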

The assumption of stationarity is dangerous for financial risk management. It lulls us into believing that, once we have collected enough data, we have completely circumscribed the range of possible market outcomes, because tomorrow will just be another realization of the process that generated today. It fools us into believing we know the values of parameters like volatility and equity market beta sufficiently well that we can ignore any residual uncertainty from their estimation. It makes us complacent about the choice of models and functional forms because it credits hypothesis tests with undue discriminatory power. And it leads us again and again into crisis situations because it attributes too little probability to extreme events.

We cannot dismiss the use of ergodic stationarity as a mere simplifying assumption, of the sort regularly and sensibly made in order to arrive at an elegant and acceptable approximation to a more complex phenomenon. A model of a stationary time series approximates an object that can never be observed: a time series of infinite length. This says nothing about the model's ability to approximate a time series of any finite length, such as the lifetime of a trading strategy, a career, or a firm. When events deemed to occur 0.01 percent of the time by a risk model happen twice in a year, there may be no opportunity for another hundred years to prove out the assumed stationarity of the risk model.
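
To make the arithmetic explicit (my back-of-the-envelope calculation, assuming roughly 250 trading days per year and independence across days, as such models typically do): an event assigned a daily probability of 0.01 percent should arrive about once every $1/10^{-4} = 10{,}000$ trading days, roughly forty years, and the probability of seeing it two or more times in a single year is approximately

$$ \binom{250}{2}\left(10^{-4}\right)^2 \approx 3 \times 10^{-4}. $$

Seeing that happen is, on the model's own terms, overwhelming evidence against the model rather than extraordinary bad luck.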

Recalibration Does Not Overcome the Limits of a Time-Invariant Model

Modern financial crises are intimately connected with risk modeling built on the assumption of stationarity. For large actors like international banks, brokerage houses, and institutional investors, risk models matter a lot for the formation of expectations. When those models depend on the assumption of stationarity, they lose the ability to adapt to data that are inconsistent with the assumed data-generation process, because any other data-generation process is ruled out by fiat.

Consider what happens when an institution simply recalibrates the same models, without reexamining their specification, over a period when economic expansion is slowing and beginning to turn toward recession. As the rate of economic growth slows, the assumption of ergodicity dissolves new data signaling recession into a long-run average indicating growth. Firms and individuals making decisions based on models are therefore unable to observe the signal being sent by the data that a transition in the reality of the market is under way, even as they recalibrate their models. As a result, actors continue to behave as if growth conditions prevail, even as the market is entering a process of retrenchment.
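
A stylized sketch of this mechanism (mine, not the author's; the growth figures are invented): recalibrating a mean estimate on an expanding window, as the assumption of a single fixed data-generating process invites, barely registers a downturn in the most recent year, whereas an estimate conditioned on recent history turns negative almost immediately.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stylized monthly growth data: eight years of expansion followed by one year of contraction.
expansion = rng.normal(loc=0.3, scale=0.5, size=96)
downturn = rng.normal(loc=-0.4, scale=0.5, size=12)
data = np.concatenate([expansion, downturn])

# Recalibration under assumed stationarity: re-estimate the mean on all data observed so far.
expanding_mean = np.cumsum(data) / np.arange(1, len(data) + 1)

# Conditioning on recent history instead: a short rolling window.
window = 12
rolling_mean = np.array([data[max(0, s - window + 1): s + 1].mean() for s in range(len(data))])

print(round(expanding_mean[-1], 2))  # still positive: the downturn dissolves into the long-run average
print(round(rolling_mean[-1], 2))    # negative: recent data dominate
```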

Thinking about a series of forecasts made during this period of transition, one would likely see forecasts consistently missing in the same direction, though no information about the forecast errors would be fed back into the model. When models encompass a large set of variables, small changes in the environment can lead to sharp changes in model parameters, creating significant hedging errors when those parameters inform hedge ratios. Activity becomes increasingly at odds with reality as the reversal of conditions continues, until the preponderance of new data can no longer be ignored; through successive recalibrations the weight of the new data balances and then overtakes the old. Suddenly actors are confronted by a vastly different reality as their models catch up to the new data. The result is a perception of discontinuity. The available analytics no longer support the viability of the financial institution's chosen risk profile. Management reacts to the apparent discontinuity, past decisions are abruptly reversed, and consequently market prices show extreme movements that were not previously believed to be within the realm of possibility.

Models staked on stationarity thus sow the seeds of their own destruction by encouraging poor decision making, the outcomes of which later register as a realization of the nearly impossible. Crises are therefore less about tail events “occurring” than about model-based expectations failing to adapt. As a result, perennial efforts to capture extreme risks in stationary models as if they were simply given are, in large part, misguided. They are as much effect as they are cause. Financial firms would do much better to confront the operational task of revising risk measurements continuously, and to use the outputs of that continuous learning process to control their business decisions. The goal of relaxing the assumption of stationarity within one's risk models is to allow revisions of expectations, insofar as our expectations of financial markets are formed with the aid of models, to take place smoothly, in a way that successive recalibrations cannot.
