Modern Computational Finance - Antoine Savine

CHAPTER 1
Opening Remarks

INTRODUCTION


In the early stages of derivative markets, dedicated models were typically put together to value and risk manage new transaction types as they emerged. After Black and Scholes [5] published in 1973 a closed‐form formula for European options under a constant volatility assumption, alternative models—like the Cox‐Ross‐Rubinstein binomial tree [7] in 1979, later replaced by more efficient finite difference grids—were developed to value American options under the same assumptions.

As trading in derivatives matured, the range of complex transactions expanded and models increased in complexity so that numerical methods became necessary for all but the simplest vanilla products. Models were typically implemented in terms of finite difference grids for transactions with early exercise and Monte‐Carlo simulations for products with path‐dependency. Notably, models increased in dimension as they grew in complexity, making grids impractical in most cases, and Monte‐Carlo simulations became the norm, with early exercises typically supported by a version of the Longstaff‐Schwartz regression‐based algorithm [22]. Sophisticated models also had to be calibrated before they were used to value and risk manage exotics: their parameters were set to match the market prices of less complex, more liquid derivatives, typically European calls and puts.

Most of the steps involved—calibration, Monte‐Carlo path generation, backward induction through finite difference grids—were independent of the transactions being valued; therefore, it became best practice to implement models in terms of generic numerical algorithms, independently of products. Practitioners developed modular libraries, like the simulation library of our publication [27], where transactions were represented in separate code that interacted with models to produce values and risk sensitivities.


However, at that stage, dedicated code was still written for each family of transactions: to add a new product to the library, its payoff had to be hard-coded by hand, and the updated software compiled, tested, debugged, and released.

The modular logic could be pushed one step further with the introduction of scripting languages, where users create products dynamically at run time. The user describes the schedule of cash‐flows for a transaction with a dedicated language specifically designed for that purpose, for example:

STRIKE 100
01Jun2021 opt pays max( 0, spot() - STRIKE)

for a 1y European call with strike 100, or

STRIKE 100
BARRIER 120
01Jun2020 vAlive = 1
Start: 01Jun2020
End: 01Jun2021
Freq: weekly
    if spot() > BARRIER then vAlive = 0 endIf
01Jun2021 opt pays vAlive * max( 0, spot() - STRIKE)

for the same call with a 120 (weekly monitored) knock‐out barrier.1

The scripts are parsed into expression trees and visited by an evaluator, a particular breed of visitor, which traverses the trees while maintaining internal state to compute payoffs over the scenarios generated by a model.


All of this is explained in depth, with words and code, in part I.

With scripting, finance professionals were able to create and modify a product on the fly while calculating its price and risk sensitivities in real time. The obvious benefits of this technology quickly made it best practice among key derivatives players and in itself greatly contributed to the development of structured derivatives markets.

Early implementations, however, suffered from excessive performance overhead and a somewhat obscure syntax that made scripting inaccessible to anyone but experienced quantitative analysts and traders. Later implementations fixed those flaws. The modern implementation in this publication comes with a natural syntax, is accessible to non-programmers,2 and its performance approaches that of hard-coded payoffs.

This publication builds on the authors' experience to produce a scripting library with maximum scope, modularity, transparency, stability, scalability, and performance.

Importantly, our implementation transcends the context of valuation and sensitivities; it offers a consistent, visitable representation of cash‐flows that lends itself to a scalable production of risk, back‐testing, capital assessment, value adjustments, or even middle office processing for portfolios of heterogeneous financial transactions. We also focus on performance and introduce the key notion of pre‐processing, whereby a script is automatically analyzed, prior to its valuation or risk, to optimize the upcoming calculations. Our framework provides a representation of the cash‐flows and a way of working with them that facilitates not only valuation but also pre‐processing and any kind of query or transformation that we may want to conduct on the cash‐flows of a set of transactions.

Scripting makes a significant difference in the context of xVA, as explained in part V, and more generally, all regulatory calculations that deal with multiple derivatives transactions of various sophistication, written on many underlying assets belonging to different asset classes. Before xVA may be computed over a netting set,3 all the transactions in the netting set must be aggregated. This raises a very practical challenge and a conundrum when the different transactions are booked in different systems and represented under different forms. Scripting offers a consistent representation of all the transactions, down to their cash‐flows. Scripted transactions are therefore naturally aggregated or manipulated in any way. A key benefit of scripted cash‐flows is that scripts are not black boxes. Our software (more precisely, the visitors implemented in the software) can “see” and analyze scripts, in order to aggregate, compress, or decorate transactions as explained in part V, extract information such as path‐dependence or non‐linearity and select the model accordingly, implement automatic risk smoothing (part IV), or analyze a valuation problem to optimize its calculation. Our library is designed to facilitate all these manipulations, as well as those we haven't thought about yet.

The purpose of this publication is to provide a complete reference for the implementation of scripting and its application in derivatives systems to its full potential. The defining foundations of a well‐designed scripting library are described and illustrated with C++ code, available online on:

  https://github.com/asavine/Scripting/tree/Book-V1

Readers will find significant differences between the repository code and the code printed in the book. The repository has been undergoing substantial modernization and performance improvements not covered in this edition of the text. Make sure to check out the branch Book-V1, not master. In addition, the code base evolves throughout the book, and the online repository contains the final version. While reading the book, it is advisable to type the printed code by hand rather than rely on the online repository.

This code constitutes a self‐contained, professional implementation of scripting in C++. It is written in standard, modern C++ without any external dependency. It was tested to compile on Visual Studio 2017. The library is fully portable across financial libraries and platforms and includes an API, described in section 3.7, to communicate with any model.

The code as it stands can deal with a model of any complexity, as long as it is a single-underlying model. It works with the Black and Scholes model of [5] and all kinds of extensions, including local and/or stochastic volatility, like Dupire's [9] and [10], or single-underlying stochastic volatility models.4 The library cannot deal with multiple underlying assets, stochastic interest rates, or advanced features such as the Longstaff-Schwartz algorithm of [22]. It doesn't cover the management of transactions throughout their life cycle or deal with past fixings. All those features, necessary in a production environment, would distract us from correctly establishing the defining bases. These extensions are discussed in detail in parts II and III, although not in code.

Our online repository also provides an implementation of fuzzy logic for automatic risk smoothing, an Excel interface to the library, a tutorial for exporting C++ code to Excel, a prebuilt XLL, and a demonstration spreadsheet.

The C++ implementation is discussed in part I, where we explore in detail the key concepts of expression trees and visitors. We show how they are implemented in modern C++ and define the foundations for an efficient, scalable scripting library. We also provide some (very) basic self‐contained models to test the library, although the library is model agnostic by design and meant to work with any model that implements an API that we specify. For clarity, the code and comments related to parsing (the transformation of a text script into an expression tree) are moved from the body of the text into an appendix.

We discuss in part II the implementation of some basic extensions, and in part III more advanced features like interest rates and multiple currencies and assets. We discuss the key notion of indexing simulated data; indexing is a special flavor of pre-processing, crucial for performance. We also discuss support for LSM, the regression-based algorithm designed by Carriere [6] and by Longstaff and Schwartz [22] to deal with early exercise in the context of Monte-Carlo simulations, and later reused in the industry for xVA and other regulatory calculations. These parts include extensive guidance for developing the extensions, but not source code.

The rest of the publication describes some applications of scripting outside the strict domain of pricing and demonstrates that our framework, based on visitors, can resolve many other problems.

Part IV shows how our framework can accommodate a systematic smoothing of discontinuities to resolve the problem of unstable risk sensitivities for products like digitals or barriers with Monte-Carlo simulations. Smoothing consists of approximating discontinuous payoffs by close continuous ones, like the approximation of digitals by tight call spreads. Bergomi discusses and optimizes smoothing in [4]. Our purpose is different. We demonstrate that smoothing can be abstracted as a particular application of fuzzy logic. This realization leads to an algorithm that systematically smooths not only digitals and barriers but any payoff, automatically. The practical implementation of the algorithm is facilitated by the design of our scripting library. For clarity, the source code is not provided in the body of the text, but it is available in our online repository.

Part V introduces the application to xVA, which is further covered in our upcoming dedicated publication. The code for xVA calculations is not provided.
