1.3 SIMULATION MODELS
In a strict valuation context, the computation of the value and risk sensitivities of a script requires a model. Models are the primary focus of mathematical finance and are discussed in a vast number of textbooks, publications, and lectures. The clearest and most comprehensive description of the design and implementation of derivative models can be found in the three volumes of [20].
This publication is not about models. We develop a model‐agnostic scripting library that is written independently of models and designed to communicate with any model that satisfies the API discussed in section 3.7 (where we show that the API can also be added to some existing models with an adapter class). For our purpose, a model is anything that generates scenarios, also called paths in the context of Monte‐Carlo simulations, that is, the values of the relevant simulated variables (essentially, underlying asset prices) on the relevant event dates (when something meaningful is meant to happen for the transaction, like a fixing, a payment, an exercise, or the update of a path dependency). Our library consumes scenarios to evaluate payoffs, without concern for how those scenarios were produced.
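To fix ideas, the following minimal sketch shows the kind of object we have in mind. The names (SampleOnDate, Scenario, IModel, generatePath) are illustrative only; the actual data structures and API of the library are developed in chapter 3.

```cpp
#include <cstddef>
#include <vector>

//  Illustrative sketch, not the library's actual interface.
//  A scenario holds the simulated state on every event date of the product:
//  here, simply the values of the relevant market variables and the numeraire.
struct SampleOnDate
{
    double              numeraire;
    std::vector<double> spots;      //  simulated asset prices on that date
};
using Scenario = std::vector<SampleOnDate>;     //  one entry per event date

//  For our purpose, a model is anything able to fill scenarios
class IModel
{
public:
    virtual ~IModel() = default;

    //  number of event dates the model was initialized with
    virtual size_t numEventDates() const = 0;

    //  generate the next path into a preallocated scenario
    virtual void generatePath(Scenario& path) = 0;
};
```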
Scripting libraries are also typically written to be usable with a number of numerical implementations, either by forward induction, as in Monte‐Carlo simulations, or by backward induction, as in finite difference grids. To avoid unnecessary confusion, our book focuses on forward induction with Monte‐Carlo simulations, by far the most frequently used valuation context today. To further simplify our approach, we only consider path‐wise simulations. This means that simulations are delivered one path at a time, each path covering all event dates. The model is, to us, an abstract object that generates multiple scenarios (possible evolutions of the world) and communicates them sequentially for evaluation.
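For illustration only, a path‐wise Monte‐Carlo loop over the hypothetical interface sketched above could look as follows; the evaluator is left as a template parameter, and none of this is the library's actual code.

```cpp
#include <cstddef>

//  Path-wise Monte-Carlo loop: the model delivers one complete path
//  (all event dates) at a time, and the evaluator consumes it immediately.
//  IModel and Scenario are the illustrative types sketched above.
template <class Evaluator>
double monteCarloValue(IModel& model, Evaluator& eval, const size_t numPaths)
{
    Scenario path(model.numEventDates());   //  preallocated, reused across paths
    double sum = 0.0;

    for (size_t i = 0; i < numPaths; ++i)
    {
        model.generatePath(path);           //  forward induction over event dates
        sum += eval.payoff(path);           //  evaluate the scripted payoff
    }

    return sum / numPaths;                  //  aggregate: average over paths
}
```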
An alternative that we don't consider is step‐wise simulation, where all paths are computed simultaneously, date by date. The model first generates all the possible states of the world for the first event date, then moves on to the second event date, and so on until it reaches the final maturity. Step‐wise simulation is natural with particular random generators, control variates (when paths are modified after simulation so that the expectation of some function matches some target), and, more generally, calibration inside simulations as in the particle method of [16].
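Purely to contrast the loop order with the path‐wise loop above, here is a sketch of step‐wise simulation; the StepWiseModel interface and its methods are hypothetical, and this pattern is not developed further in the book.

```cpp
#include <cstddef>
#include <vector>

//  Step-wise simulation: all paths advance together, one event date at a time,
//  which makes cross-sectional adjustments (control variates, calibration
//  inside simulations) natural before moving on to the next date.
//  StepWiseModel and Scenario are hypothetical, illustrative types.
template <class StepWiseModel, class Scenario>
void stepWiseSimulation(
    StepWiseModel&          model,
    std::vector<Scenario>&  allPaths,       //  one scenario per path
    const size_t            numEventDates)
{
    for (size_t t = 0; t < numEventDates; ++t)
    {
        //  advance every path from date t-1 to date t
        model.propagateAll(allPaths, t);

        //  cross-sectional step: e.g. adjust the date-t samples so that
        //  the average of some function matches its known expectation
        model.adjustAll(allPaths, t);
    }
}
```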
The concepts and code in this book apply to path‐wise Monte‐Carlo simulations, but they can be extended to support step‐wise Monte‐Carlo and backward induction algorithms.6 This is, however, not discussed further. Note that path‐wise Monte‐Carlo includes deterministic models for linear products (these models generate a single path in which all forwards are realized) and numerical integration for European options (where each path is a point in the integration scheme against the risk‐neutral probability distribution of the relevant market variable).
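As an illustration of the first point, a deterministic model fits the hypothetical interface sketched earlier: it produces a single path in which every variable is realized at its forward. The forwards are simply passed in; in a real model they would come from the curve machinery, and the class name is ours, not the library's.

```cpp
#include <utility>
#include <vector>

//  Deterministic "model" over the illustrative IModel / Scenario types above:
//  every generated path is the same, with all variables set to their forwards.
//  Suitable for linear products only.
class DeterministicModel : public IModel
{
    std::vector<SampleOnDate>   myForwards;     //  precomputed forwards, one per event date

public:
    explicit DeterministicModel(std::vector<SampleOnDate> forwards)
        : myForwards(std::move(forwards)) {}

    size_t numEventDates() const override { return myForwards.size(); }

    void generatePath(Scenario& path) override
    {
        path = myForwards;      //  the single, deterministic path
    }
};
```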
Monte‐Carlo simulations are explained in detail in [19], [15], and the dedicated chapters of [20], as well as in our publication [27], which provides a framework and code in C++ and focuses on a parallel implementation.
We refer to the figure on page 15 for an illustration of the valuation process. The model produces a number of scenarios; every scenario is consumed by the evaluator to compute the payoff over that scenario. This is repeated over a number of paths, followed by an aggregation of the results. The evaluator is the object that computes payoffs (sums of cash flows normalized by the numeraire, which is also part of the scenarios, as described in chapter 5) as a function of the simulated scenarios. The evaluator conducts its work by traversal of the expression tree that represents the script, while maintaining an internal stack for the storage of intermediate results. We describe the evaluator in detail in section 3.6.
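The following heavily simplified sketch illustrates the principle of a stack‐based evaluation of an expression tree over a scenario. The node representation, the handling of cash flows and the numeraire, and the evaluator of the actual library, described in section 3.6, are substantially richer; every name here is a simplified placeholder.

```cpp
#include <cstddef>
#include <memory>
#include <stack>
#include <vector>

//  Toy expression tree node: constants, spot fixings, and two operators.
struct Node
{
    enum class Kind { Constant, Spot, Add, Mult };

    Kind                                kind;
    double                              value   = 0.0;  //  used by Constant
    size_t                              dateIdx = 0;    //  used by Spot: which event date
    std::vector<std::unique_ptr<Node>>  children;       //  used by Add and Mult
};

//  Toy evaluator over the illustrative Scenario type sketched earlier.
class Evaluator
{
    const Node*         myRoot;     //  root of the expression tree to evaluate
    std::stack<double>  myStack;    //  storage of intermediate results

    //  post-order traversal: visit children first, then apply the node,
    //  leaving the node's value on top of the stack
    void visit(const Node& node, const Scenario& path)
    {
        for (const auto& child : node.children) visit(*child, path);

        switch (node.kind)
        {
        case Node::Kind::Constant:
            myStack.push(node.value);
            break;
        case Node::Kind::Spot:
            //  single underlying assumed in this sketch
            myStack.push(path[node.dateIdx].spots[0]);
            break;
        case Node::Kind::Add:
        case Node::Kind::Mult:
        {
            const double rhs = myStack.top(); myStack.pop();
            const double lhs = myStack.top(); myStack.pop();
            myStack.push(node.kind == Node::Kind::Add ? lhs + rhs : lhs * rhs);
            break;
        }
        }
    }

public:
    explicit Evaluator(const Node& root) : myRoot(&root) {}

    //  evaluate the expression over one scenario; a real evaluator would
    //  also accumulate cash flows normalized by the numeraire
    double payoff(const Scenario& path)
    {
        visit(*myRoot, path);
        const double result = myStack.top();
        myStack.pop();
        return result;
    }
};
```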
The evaluation of the script over a path is typically executed a large number of times; the overall performance therefore largely depends on the optimization of this step, which is achieved with many forms of pre‐processing.