The xVA Challenge, by Jon Gregory

3
The OTC Derivatives Market
3.3 Risk management of derivatives


3.3.1 Value-at-risk

Financial risk management of derivatives has changed over the last two decades. One significant aspect has been the implementation of more quantitative approaches, the most significant probably being value-at-risk (VAR). Initially designed as a metric for market risk, VAR has subsequently been used across many financial areas as a means for efficiently summarising risk via a single quantity. For example, the concept of PFE (potential future exposure), when used to assess counterparty risk, is strongly related to the definition of VAR.


Figure 3.4 Illustration of the value-at-risk (VAR) concept at the 99 % confidence level. The VAR is 125, since the chance of a loss greater than this amount is no more than 1 %.


A VAR number has a simple and intuitive explanation as the worst loss over a target horizon to a certain specified confidence level. The VAR at the α% confidence level gives a value that will be exceeded with no more than a (1 – α)% probability. An example of the computation of VAR is shown in Figure 3.4. The VAR at the 99 % confidence level is 125 (i.e. a loss of 125), since the probability that this will be exceeded is no more than 1 %. (It is actually 0.92 % due to the discrete13 nature of the distribution.) To find the VAR, one therefore finds the minimum loss that will be exceeded with no more than the specified probability.
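As a minimal numerical sketch (not from the book, using a hypothetical loss sample), the definition above amounts to taking the α-quantile of the loss distribution; NumPy's `quantile` function does exactly this:

```python
import numpy as np

def var_from_losses(losses, alpha=0.99):
    # VAR at confidence level alpha: the loss level that will be
    # exceeded with no more than (1 - alpha) probability,
    # i.e. the alpha-quantile of the loss distribution
    return np.quantile(losses, alpha)

# Hypothetical sample of 10,000 daily losses (positive = loss)
rng = np.random.default_rng(42)
losses = rng.normal(loc=0.0, scale=50.0, size=10_000)

var_99 = var_from_losses(losses, alpha=0.99)
# By construction, roughly 1% of the sampled losses exceed this level
exceed_frac = np.mean(losses > var_99)
```

The fraction of the sample exceeding `var_99` comes out at around 1 %, matching the 99 % confidence interpretation given in the text.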

VAR is a very useful way to summarise the risk of an entire distribution in a single number that can be easily understood. It also makes no assumption as to the nature of the distribution itself, such as that it is a Gaussian.14 It is, however, open to problems of misinterpretation, since VAR says nothing at all about what lies beyond the defined (1 % in the above example) threshold. To illustrate this, Figure 3.5 shows a slightly different distribution with the same VAR. In this case, the probability of losing 250 is 1 % and hence the 99 % VAR is indeed 125 (since there is zero probability of other losses in between). We can see that changing the size of this extreme loss does not change the VAR, since only its probability is relevant. Hence, VAR gives no indication of the possible loss beyond the confidence level chosen, and over-reliance upon VAR numbers can be counterproductive, as it may lead to false confidence.


Figure 3.5 Distribution with the same VAR as Figure 3.4.


Another problem with VAR is that it is not a coherent risk measure (Artzner et al., 1999), which basically means that in certain (possibly rare) situations it can exhibit non-intuitive properties. The most obvious of these is that VAR may not behave in a sub-additive fashion. Sub-additivity requires a combination of two portfolios to have no more risk than the sum of their individual risks (due to diversification).
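A standard illustration of the sub-additivity failure (hypothetical numbers, not taken from the book) is two independent positions, each losing 100 with 0.8 % probability and nothing otherwise. Each position's 99 % VAR is zero, yet the combined portfolio breaches the 1 % threshold and has a VAR of 100:

```python
import numpy as np

def var(losses, alpha=0.99):
    # "higher" picks the smallest sample value with at least
    # alpha of the distribution at or below it
    return np.quantile(losses, alpha, method="higher")

# Two independent positions, each losing 100 with (hypothetical)
# probability 0.8% and nothing otherwise
p, n = 0.008, 1_000_000
rng = np.random.default_rng(0)
loss_a = np.where(rng.random(n) < p, 100.0, 0.0)
loss_b = np.where(rng.random(n) < p, 100.0, 0.0)

var_a = var(loss_a)            # 0: a loss occurs with only 0.8% probability
var_b = var(loss_b)            # 0, by the same argument
var_ab = var(loss_a + loss_b)  # 100: P(some loss) ~ 1.6% > 1%
```

Here `var_ab` exceeds `var_a + var_b`, so combining the two portfolios appears to add risk rather than diversify it, which is the non-intuitive behaviour referred to above.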

A slight modification of the VAR metric is commonly known as expected shortfall (ES). It is defined as the average loss at or above the level defined by VAR; equivalently, it is the average loss given that the loss is at least equal to the VAR. ES does not have quite as intuitive an explanation as VAR, but it has more desirable properties: it is a coherent risk measure and does not completely ignore the impact of large losses (the ES in Figure 3.5 is indeed greater than that in Figure 3.4). For these reasons, the Fundamental Review of the Trading Book (BCBS, 2013) has suggested that banks use ES rather than VAR for measuring their market risk (this may eventually also apply to the calculation of CVA capital, as discussed in Section 8.7).
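The ES definition can be sketched in a few lines (a hypothetical fat-tailed loss sample, not from the book): ES averages only the losses at or beyond the VAR level, so it always sits at or beyond VAR and reflects the tail that VAR ignores.

```python
import numpy as np

def var_es(losses, alpha=0.99):
    # VAR: the alpha-quantile of the losses
    # ES: the average of the losses at or above the VAR level
    var = np.quantile(losses, alpha)
    es = losses[losses >= var].mean()
    return var, es

# Hypothetical loss sample with a fat tail (Student-t, 3 d.o.f.)
rng = np.random.default_rng(7)
losses = 50.0 * rng.standard_t(df=3, size=100_000)

var_99, es_99 = var_es(losses)
# es_99 > var_99: ES picks up the severity of tail losses
```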

The most common implementation of VAR and ES approaches is using historical simulation. This takes a period (usually several years) of historical data containing risk factor behaviour across the entire portfolio in question. It then resimulates over many periods how the current portfolio would behave when subjected to the same historical evolution. For example, if four years of data were used, then it would be possible to compute around 1,000 different scenarios of daily movements for the portfolio. If a longer time horizon is of interest, then quite commonly the one-day result is simply extended using the “square root of time rule”. For example, in market risk VAR models used by banks, regulators allow the ten-day VAR to be defined as √10 ≈ 3.16 multiplied by the one-day VAR. VAR models can also be “backtested” to check their predictive performance empirically. Backtesting involves performing an ex-post comparison of actual outcomes with those predicted by the model. VAR lends itself well to backtesting since a 99 % number should be exceeded on average once every hundred observations.
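The recipe above can be sketched as follows (a minimal illustration, with randomly generated numbers standing in for the revaluation of the current portfolio under roughly four years of historical daily moves):

```python
import numpy as np

# ~1,000 business days of hypothetical daily P&L scenarios for the
# current portfolio (in practice: revaluations under historical moves)
rng = np.random.default_rng(1)
daily_losses = -rng.normal(0.0, 1.0, size=1_000)

# One-day 99% VAR from the empirical distribution
var_1d = np.quantile(daily_losses, 0.99)

# Square-root-of-time rule: scale the one-day VAR to ten days
var_10d = np.sqrt(10) * var_1d   # sqrt(10) is about 3.16

# Backtest count: a 99% VAR should be exceeded on
# roughly 1 day in 100
exceptions = int(np.sum(daily_losses > var_1d))
```

On 1,000 scenarios, the exception count comes out at around ten, consistent with the 99 % confidence level.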

It is important to note that the use of historical simulation and backtesting is relatively straightforward for VAR and ES due to the short time horizon (ten days) involved. For counterparty risk assessment (and xVA in general), much longer time horizons are involved and quantification is therefore much more of a challenge.

3.3.2 Models

The use of metrics such as VAR relies on quantitative models in order to derive the distribution of returns from which such metrics can be calculated. The use of such models facilitates combining many complex market characteristics such as volatility and dependence into one or more simple numbers that can represent risk. Models can compare different trades and quantify which is better, at least according to certain predefined metrics. All of these things can be done in minutes or even seconds to allow institutions to make fast decisions in rapidly moving financial markets.

However, the financial markets have something of a love/hate relationship with mathematical models. In good times, models tend to be regarded as invaluable, facilitating the growth in complex derivatives products and dynamic approaches to risk management adopted by many large financial institutions. The danger is that models tend to be viewed as either “good” or “bad” depending on the underlying market conditions, whereas in reality models can be good or bad depending on how they are used. An excellent description of the intricate relationship between models and financial markets can be found in MacKenzie (2006).

The modelling of counterparty risk is an inevitable requirement for financial institutions and regulators. This can be extremely useful and measures such as PFE, the counterparty risk analogue of VAR, are important components of counterparty risk management. However, like VAR, the quantitative modelling of counterparty risk is complex and prone to misinterpretation and misuse. Furthermore, unlike VAR, counterparty risk involves looking years into the future rather than just a few days, which creates further complexity not to be underestimated. Not surprisingly, regulatory requirements over backtesting of counterparty risk models15 have been introduced to assess performance. In addition, a greater emphasis has been placed on stress testing of counterparty risk, to highlight risks in excess of those defined by models. Methods to calculate xVA are, in general, under increasing scrutiny.

3.3.3 Correlation and dependency

Probably the most difficult aspect in understanding and quantifying financial risk is that of co-dependency between different financial variables. It is well known that historically estimated correlations may not be a good representation of future behaviour. This is especially true in a more volatile market environment, or crisis, where correlations have a tendency to become very large. Furthermore, the very notion of correlation (as used in financial markets) may be heavily restrictive in terms of its specification of co-dependency.

Counterparty risk takes difficulties with correlation to another level, for example compared to traditional VAR models. Firstly, correlations are inherently unstable and can change significantly over time. This is important for counterparty risk assessment, which must be made over many years, compared with market risk VAR, which is measured over just a single day. Secondly, correlation (as it is generally defined in financial applications) is not the only way to represent dependency, and other statistical measures are possible. Particularly in the case of wrong-way risk (Chapter 19), the treatment of co-dependencies via measures other than correlation is important. In general, xVA calculations require a careful assessment of the co-dependencies between credit risk, market risk, funding and collateral aspects.
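The point that linear correlation can be a restrictive measure of co-dependency can be made with a deliberately artificial example (not from the book): a variable that is a deterministic function of another can still show a Pearson correlation of roughly zero.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=100_000)
y = x ** 2   # perfectly dependent on x, but not linearly

# Pearson correlation is ~0 because cov(x, x^2) = E[x^3] = 0
# for a symmetric distribution
corr = np.corrcoef(x, y)[0, 1]
```

A near-zero correlation is thus entirely consistent with complete dependence, which is one reason wrong-way risk (Chapter 19) is treated with dependency measures beyond linear correlation.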

13

For a continuous distribution, VAR is simply a quantile. (A quantile gives a value on a probability distribution where a given fraction of the probability falls below that level.)

14

Certain implementations of a VAR model (notably the so-called variance-covariance approach) may make normal (Gaussian) distribution assumptions, but these are done for reasons of simplification and the VAR idea itself does not require them.

15

Under the Basel III regulations.
