CHAPTER 1
Silicon Valley Is Coming!
FAST ANALYTICS

TransferWise and loan-issuing apps did not emerge simply because requests can now be sent quickly on the go. Beneath every successful money transfer and loan approval is a complex analysis that determines the risk of each operation.

At the core of all the super-fast information sharing is data analytics. Take, for instance, any near-instantaneous loan approval process. All loans are subject to credit risk – the risk that the loan is not repaid on time, if at all. Typically, the higher the probability that the loan is repaid in full and on schedule, the lower the interest rates the lender needs to charge the borrower to make the transaction worthwhile. The reverse also holds: the higher the probability that the borrower defaults, the higher the rates the lender needs to charge to compensate for the risk of default. The creditworthiness of the borrower can be forecasted using various factors, of which free cash flow and its relationship to existing short-term and long-term debt, as well as other factors from Edward Altman's model, are critical. The ability to gather and process the required data points in real time is making here-and-now loan approvals possible.
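
To make the role of Altman's model concrete, below is a minimal Python sketch of the classic Altman Z-score, one of the best-known creditworthiness measures. The balance-sheet inputs are hypothetical placeholders; a real-time lender would feed them from live data.

```python
# A minimal sketch of the kind of scoring behind instant loan decisions:
# the classic Altman Z-score (1968) for public manufacturing firms.
# Input figures are hypothetical placeholders.

def altman_z_score(working_capital, retained_earnings, ebit,
                   market_value_equity, sales, total_assets, total_liabilities):
    """Return the Altman Z-score from standard balance-sheet items."""
    x1 = working_capital / total_assets           # liquidity
    x2 = retained_earnings / total_assets         # cumulative profitability
    x3 = ebit / total_assets                      # operating efficiency
    x4 = market_value_equity / total_liabilities  # leverage
    x5 = sales / total_assets                     # asset turnover
    return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5

z = altman_z_score(working_capital=1.2e6, retained_earnings=2.5e6, ebit=0.9e6,
                   market_value_equity=6.0e6, sales=8.0e6,
                   total_assets=10.0e6, total_liabilities=4.0e6)

# Conventional cutoffs: above roughly 2.99 is the "safe" zone, below 1.81 signals distress.
zone = "safe" if z > 2.99 else "distress" if z < 1.81 else "grey"
print(f"Z-score: {z:.2f} ({zone} zone)")
```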

To many financial practitioners, risk has long implied a multiday Monte Carlo simulation, something impossible to complete in a matter of hours, let alone seconds. Now, with new technologies, über-fast processing of data is not only feasible, it is already deployed in many applications.
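
As an illustration of how much of this computation can now be vectorized, here is a minimal sketch of a one-day value-at-risk Monte Carlo run in Python with NumPy. The position size, return distribution, and parameters are all assumed for the example.

```python
# A minimal vectorized Monte Carlo risk run: one-day 99% value at risk
# for a hypothetical $10M position, assuming normally distributed returns.
import numpy as np

rng = np.random.default_rng(seed=42)

position_value = 10_000_000      # hypothetical portfolio value
mu, sigma = 0.0002, 0.015        # assumed daily return mean and volatility

simulated_returns = rng.normal(mu, sigma, size=1_000_000)  # one million scenarios
pnl = position_value * simulated_returns                   # profit and loss per scenario

var_99 = -np.percentile(pnl, 1)  # loss exceeded in only 1% of scenarios
print(f"1-day 99% VaR: ${var_99:,.0f}")
```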

How has data processing accelerated over time? Several applications running atop cloud architecture help dissect vast amounts of data faster than the blink of an eye. MapReduce was a first generation of fast software that allowed data mining of extensive volumes of information and helped propel Google Analytics to its current lead. Still newer, faster applications are here. Spark, a framework that also runs on top of cloud architecture, outperforms MapReduce and delivers lightning-fast inferences through advanced management of computer resources, data allocation, and, ultimately, super-fast computational algorithms rooted in the same technology that allows real-time image and signal processing.
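
The map-and-reduce pattern that both systems share can be sketched in a few lines of PySpark. The file name and record layout below are hypothetical; the example only illustrates the style of computation, not any particular production pipeline.

```python
# A minimal PySpark sketch of the map/reduce pattern: counting trades per ticker
# from a hypothetical transactions file. Spark keeps intermediate data in memory,
# which is where much of its speed advantage over classic MapReduce comes from.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("fast-analytics-sketch").getOrCreate()

# Hypothetical input: one "ticker,price,quantity" record per line.
lines = spark.sparkContext.textFile("transactions.csv")

trade_counts = (
    lines.map(lambda line: (line.split(",")[0], 1))  # map: emit (ticker, 1)
         .reduceByKey(lambda a, b: a + b)            # reduce: sum counts per ticker
)

for ticker, count in trade_counts.collect():
    print(ticker, count)

spark.stop()
```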

To understand why customers make decisions, companies harness the data available to them. In the past, customer segmentation studies were fixed at a point in time and used a variety of analytical approaches. Why go through this effort? By identifying types of customers who tend to make similar decisions, a company can tailor its marketing, products, and investments. But that is the traditional approach.
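
A minimal sketch of such segmentation, using k-means clustering from scikit-learn, might look as follows; the customer features and the number of segments are illustrative assumptions.

```python
# A toy customer-segmentation sketch with k-means clustering.
# Feature names and values are hypothetical.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical customer features: annual spend, transactions per month, average balance.
customers = np.array([
    [12_000, 25,  3_000],
    [   800,  2, 15_000],
    [15_000, 30,  2_500],
    [   900,  3, 20_000],
    [ 5_000, 10,  8_000],
    [ 4_500,  9,  7_500],
])

scaled = StandardScaler().fit_transform(customers)        # put features on one scale
segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scaled)
print(segments)  # segment label per customer
```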

With all forms of transactional and social data available, and with enormously more computing power, companies can predict the future behavior of clients almost at the same pace as the clients are making their own decisions. For example, where will aggressive high-frequency traders trade in five minutes? New technologies, such as several offered by AbleMarkets, can answer this question on the fly.
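
As a generic illustration of prediction at the pace of the data, and not a description of AbleMarkets' methodology, the following sketch updates an online classifier with each new batch of synthetic observations and immediately re-scores the latest one.

```python
# A generic streaming-prediction sketch: an incrementally trained classifier that is
# updated as data arrive and re-scored on the fly. Features and labels are synthetic.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier(loss="log_loss")   # logistic regression trained incrementally
classes = np.array([0, 1])               # hypothetical label: 1 = aggressive flow ahead

# Pretend a new batch of labeled observations arrives every minute.
for minute in range(60):
    X_batch = rng.normal(size=(100, 5))                               # synthetic features
    y_batch = (X_batch[:, 0] + 0.5 * X_batch[:, 1] > 0).astype(int)   # synthetic labels
    model.partial_fit(X_batch, y_batch, classes=classes)              # update, don't refit

# Score the very latest observation with the continuously updated model.
x_now = rng.normal(size=(1, 5))
print(f"P(aggressive flow in 5 minutes): {model.predict_proba(x_now)[0, 1]:.2f}")
```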

Traditional players need to review their technology spend and consider that while they are making incremental improvements, their clients may be evaluating a leap to an insurgent with a category‐killing new app.

Not only are startups working to provide discrete services alongside the likes of Google, but entire business models are being created to challenge established ways of doing business. For example, robo-investing is a substitute for online brokers as well as full-service brokers and financial planners. The idea has been around for a while; however, in the last five years the momentum has started to grow. According to Corporate Insights, robo-advisers had gathered $20 billion in assets by the end of 2014, a small portion of the $24 trillion in retirement assets in the United States. The growth and the high-profile venture capital funding of Betterment and Wealthfront have led players such as Vanguard to launch their own robo-advisers. The growth of these companies is a topic the entire investment management industry is watching, and the question becomes: will the baby boom generation adopt this form of wealth management in retirement, or is this service geared to millennials?

The innovation in using predictive technology is not just about consumer habits. Of course, future fintech solutions will churn through transaction history to spot trends and use that information to provide intelligent recommendations on decisions such as which credit card to pay off first, how much to put down on a home, or how to save for a new car. They'll even suggest whether it's better to buy or lease a car. However, the majority of changes from predictive analytics will occur at the institutional level, resulting in sweeping organizational and operational changes at most financial services firms.
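
One such recommendation, which credit card to pay off first, can be illustrated with a toy version of the common highest-rate-first ("avalanche") rule; the card data are hypothetical.

```python
# A toy illustration of a payoff recommendation: rank credit cards by APR,
# highest first (the "avalanche" rule). Card data are hypothetical.
cards = [
    {"name": "Card A", "balance": 4_200, "apr": 0.239},
    {"name": "Card B", "balance": 1_100, "apr": 0.179},
    {"name": "Card C", "balance": 6_500, "apr": 0.219},
]

payoff_order = sorted(cards, key=lambda c: c["apr"], reverse=True)
for rank, card in enumerate(payoff_order, start=1):
    print(f"{rank}. {card['name']} (APR {card['apr']:.1%}, balance ${card['balance']:,})")
```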

For institutional asset managers, predictive analytics assess future volatility, price direction, and likely decisions by fund managers. A pioneer in predictive analytics for investment management is AbleMarkets, which brings aggressive high-frequency trading (HFT) transparency to market participants. AbleMarkets estimates, aggregates, and delivers simple daily averages of aggressive HFT activity so that professionals can improve their predictions of the market's reaction to events, their assessments of future volatility, and shorter-term price movements. It is used for portfolio management, volatility trading, and market surveillance by hedge funds, pension funds, and banks.
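
Purely as a generic illustration of delivering "simple daily averages" of an intraday metric, and not AbleMarkets' estimation method, a pandas aggregation might look like this; the aggressive-HFT flags below are random placeholders.

```python
# A generic sketch of computing daily averages of an intraday flag with pandas.
# The flags are random placeholders; only the aggregation step is illustrated.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
trades = pd.DataFrame({
    "timestamp": pd.date_range("2016-01-04 09:30", periods=10_000, freq="min"),
    "aggressive_hft": rng.integers(0, 2, size=10_000),  # placeholder 0/1 flag per trade
})

daily_share = (
    trades.set_index("timestamp")["aggressive_hft"]
          .resample("D")   # group trades by calendar day
          .mean()          # share of trades flagged as aggressive HFT that day
)
print(daily_share)
```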

What is different now? Computers are now involved in many economic transactions and can capture data associated with these transactions, which can then be manipulated and analyzed. Conventional statistical and econometric techniques such as regression often work well, but there are issues unique to big data sets that may require different tools. First, the sheer size of the data involved may require more powerful data manipulation tools. Advanced databases and computer languages are required for most large data sets; after all, even the latest version of Excel stops at about one million rows. What if your data set contains five billion records? Second, we may have more potential predictors than is appropriate for estimation, so we need to do some kind of variable selection. A popular technique, principal component analysis, helps here: it condenses many correlated variables into a small set of components that capture the properties common among the records. Those components then become the important variables in slicing and dicing the data. Third, large data sets may allow for more flexible relationships than simple linear models. Machine learning techniques such as decision trees, support vector machines, neural nets, deep learning, and so on may allow for more effective ways to model complex relationships.
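
A minimal sketch combining two of the tools just mentioned, principal component analysis followed by a decision tree, could look like this in Python with scikit-learn; the data are synthetic placeholders.

```python
# Dimension reduction plus a flexible model: PCA to compress many correlated
# predictors, then a decision tree fit on the resulting components.
# Data here are synthetic placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(5_000, 200))            # many potential predictors
y = (X[:, :5].sum(axis=1) > 0).astype(int)   # outcome driven by only a few of them

model = make_pipeline(
    PCA(n_components=10),                    # keep 10 components out of 200 predictors
    DecisionTreeClassifier(max_depth=4, random_state=0),
)
model.fit(X, y)
print(f"in-sample accuracy: {model.score(X, y):.2f}")
```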

