A Risk Professional's Survival Guide, by Clifford Rossi

CHAPTER 2
Overview of Financial Risk Management
RISK MANAGEMENT DEFINED


Risk management describes a collection of activities to identify, measure, and ultimately manage a set of risks. People and organizations confront risks every day: For example, an individual decides to leave a relatively secure job for another across the country with better opportunity and compensation, a government faces the threat of terrorist attacks on public transportation, or a bank determines which financial products it should offer to customers. While some risks are fairly mundane and others at times a matter of life or death, the fundamental process for assessing risk entails evaluating trade-offs among outcomes depending on the course of action taken. The complexity of the risk assessment is a function of the potential impact of a particular set of outcomes; the individual deciding to take a different job is likely to engage in a simpler assessment, perhaps drawing up a pros-and-cons template, while a government facing terrorist threats might establish a rigorous set of quantitative and surveillance tools to gather intelligence and assign likelihoods and possible effects to a range of outcomes.

Regardless of the application or circumstance, each of the assessments above has a common thread, namely, the assessment of risk. But what exactly is risk, and is it the same across all of these situations? Risk is fundamentally about quantifying the unknown. Uncertainty by its very nature tends to complicate our thinking about risk because we cannot touch or see it, although it is all around us. As human beings have advanced in applying technology and science to problem solving, risk assessment has naturally evolved to draw on those same capabilities. Quantifying uncertainty has taken the discipline of institutional risk management to a new level over the past few decades, driven by accelerating advances in computing hardware, software, and analytical techniques.

Risk and statistics share common ground, as uncertainty may be expressed using standard statistical concepts such as probability. As will be seen later, while statistics provides an intuitive and elegant way to define risk, it nonetheless offers an incomplete picture, owing to inherent limitations of standard statistical theory and applications that do not always represent actual market behavior. This does not imply that we should abandon statistical applications for assessing risk, but that a healthy dose of skepticism toward any purely analytical assessment of risk is a prerequisite to good risk management. As a starting point, basic statistical theory presents a convenient way of thinking about risk. Figure 2.1 depicts a standard normal probability distribution for some random variable x. The shape of the distribution is defined by two parameters: its mean, or central tendency, centered on 0, and its standard deviation, σ. If risk can be distilled to a single estimate, standard deviation is perhaps the most generalized depiction of risk, as it measures the degree to which outcomes stray from the expected outcome or mean level. More formally, standard deviation is expressed as shown in Equation 2.1.

\[
\sigma = \sqrt{\sum_{i=1}^{n} p_i \, (x_i - \mu)^2}
\tag{2.1}
\]


Figure 2.1 Standard Normal Distribution and Area Under the Curve


where p_i represents the probability of outcome i, x_i is the value of outcome i, and μ is the mean of the variable x. The variable x could reflect the returns from a product or service for a company, the compensation to an employee for a particular job, or the amount of collateral damage from a terrorist attack, for example. Despite the difference in the variable of interest, the one common aspect of all of these risks is that they can be measured by the standard deviation. Further, risks can be managed based on the tolerance for risky outcomes, as represented by the distance of a specific set of outcomes from their expected level.

To further reinforce the concept of standard deviation as a measure of risk, consider the returns for the firm shown in Table 2.1. There are nine different annual return outcomes representing x in Equation 2.1. The average of these scenarios is 11.3 percent. The deviation of each outcome from that mean is squared, shown as (x_i − μ)², and that result is multiplied by the outcome's probability. The sum of these probability-weighted squared deviations represents the variance of the firm's annual returns. Taking the square root of the variance yields the standard deviation of 5.91 percent. That would mean that 68 percent of the firm's potential return outcomes should lie between (11.3 − 5.91) and (11.3 + 5.91), or 5.39 and 17.21 percent, respectively.
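The probability-weighted calculation behind Equation 2.1 can be sketched in a few lines of Python. The nine return outcomes below are hypothetical placeholders (the actual values from Table 2.1 are not reproduced here), each assigned an equal probability:

```python
import math

# Hypothetical annual return outcomes (Table 2.1's actual values are not
# reproduced here), each assumed equally likely; probabilities sum to 1.
outcomes = [0.02, 0.05, 0.08, 0.10, 0.11, 0.13, 0.15, 0.18, 0.20]
probs = [1 / len(outcomes)] * len(outcomes)

# Probability-weighted mean (the expected return, mu).
mean = sum(p * x for p, x in zip(probs, outcomes))

# Variance: the sum of probability-weighted squared deviations from the mean.
variance = sum(p * (x - mean) ** 2 for p, x in zip(probs, outcomes))

# Standard deviation per Equation 2.1.
sigma = math.sqrt(variance)

print(f"mean = {mean:.2%}, std dev = {sigma:.2%}")
```

With unequal probabilities the same two sums apply unchanged; only the `probs` list differs.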


Table 2.1 Example Calculation of Standard Deviation of Firm Annual Returns


Consider a company that must decide whether to engage in a certain business activity. The firm obtains a set of historical data from the last several years of returns on similar products provided by other competitors. Suppose the mean return for the product is 15 percent with a standard deviation of 5 percent. Using the information from the standard normal distribution in Figure 2.1, the company can begin to shape its view of risk. First, the distribution of returns takes on a symmetric shape similar to the standard normal curve shown in Figure 2.1. Under such a distribution, outcomes that deviate significantly from the average come in two forms: some that create very large positive returns above the 15 percent shown on the right-hand side of the distribution, and some that create correspondingly smaller returns below 15 percent. The company realizes that returns less than 15 percent (its cost of capital) would drain resources and capital away from the firm, thus destroying shareholder value. In this context, only returns below 15 percent create risk to the company. The company now focuses on the left-hand tail, paying particular attention to how bad returns could be. The distribution's y-axis (vertical) displays the frequency, or percentage of time, that a particular return outcome would be observed. According to the standard normal distribution, approximately 68 percent of the time returns would fall within plus or minus 1 standard deviation of the mean. In this case we should find that returns between 10 and 20 percent occur about 68 percent of the time. Moving out two or three standard deviations in either direction would capture 95 and 99.7 percent of the occurrences, respectively. However, with the focus only on low-return events, the company only needs to understand the frequency of these occurrences in assessing its project risk. In this example, outcomes that generate returns between 10 and 15 percent occur about 34 percent of the time.
If the company were to look at adverse outcomes that are –2 standard deviations away from the mean, then returns between 5 and 15 percent would occur about 47.5 percent of the time. At this point, the company would need to think about what would happen if they were to observe a return of 10 percent versus 5 percent. If, for instance, the company had information to suggest that if returns reached 5 percent it would have to shut down, this would pose an unacceptable level of risk for the firm that it would want to guard against. As a result, it might establish a threshold that it will engage in products where there is a 97.5 percent chance that returns would not fall below 5 percent. Notice that since half of the outcomes fall above a 15 percent return and that 47.5 percent of the outcomes fall between 5 and 15 percent (one half of the 95 percent frequency assuming +/–2 standard deviations from the mean), then the portion of the area under the distribution accounting for returns worse than 5 percent would be 2.5 percent.
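These tail-area figures can be checked with Python's standard library, which includes `statistics.NormalDist` (Python 3.8+). The 15 percent mean and 5 percent standard deviation are taken from the example above; note that the exact normal tail below two standard deviations is about 2.3 percent, slightly less than the 2.5 percent implied by the 95 percent rule of thumb:

```python
from statistics import NormalDist

# Returns assumed normally distributed with mean 15% and std dev 5%,
# as in the example in the text.
returns = NormalDist(mu=0.15, sigma=0.05)

# Frequency of returns between 10% and 15% (within one sigma below the mean).
p_10_to_15 = returns.cdf(0.15) - returns.cdf(0.10)  # ~34%

# Frequency of returns between 5% and 15% (within two sigmas below the mean).
p_5_to_15 = returns.cdf(0.15) - returns.cdf(0.05)   # ~47.7%

# Chance of a return worse than 5%, i.e., more than 2 sigmas below the mean.
p_below_5 = returns.cdf(0.05)                       # ~2.3% (rule of thumb: 2.5%)

print(f"P(10% < r < 15%) = {p_10_to_15:.1%}")
print(f"P( 5% < r < 15%) = {p_5_to_15:.1%}")
print(f"P(r < 5%)        = {p_below_5:.1%}")
```

A threshold such as "a 97.5 percent chance that returns do not fall below 5 percent" is then simply the complement of the lower-tail probability, `1 - returns.cdf(0.05)`.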

Such use of statistics provides risk managers with easy-to-apply metrics of how much risk may exist and how much risk should be tolerated based on other considerations, such as the likelihood of insolvency. But blind use of statistics can at times jeopardize the company should actual results begin to vary significantly from historical performance. In such cases, formal measures of risk based on statistical models must be validated regularly and augmented when needed by experience and seasoned judgment. Such considerations bring to mind the need to characterize risk management in situational terms, for the existence of uncertainty in any risk management problem implies that circumstances specific to each problem can and will affect outcomes in ways that might not be precisely measured using rigorous analytical methodologies based on historical information.

Situational Risk Management

As the phrase implies, situational risk management is a way of assessing risk that takes into account the specific set of circumstances in place at the time of the assessment. These could include the market and economic conditions prevailing at the time, the clients or customers of the products posing risk and their behavior, business processes, accounting practices, and regulatory and political conditions, among other factors. Complicating the problem further is the need to take these factors into account in projecting potential future outcomes. All of this may seem daunting to the risk manager who must assess risk based on the unique situation of each particular problem.

If we could teleport back to 2004 into a major mortgage originator's risk management department, it might provide some insights into the nature of situational risk management. Consider the heads of risk management of two large mortgage originators facing a decision whether to expand their mortgage production activities. Both firms face extraordinary pressures on their businesses due to commoditization of prime mortgages that are typically sold to the government-sponsored enterprises Fannie Mae and Freddie Mac. As a result, prices for these loans have squeezed profit margins to a point that other sources of revenue are required for the long-term sustainability of the franchise. Consequently, one of the companies, X Bank (a mortgage-specializing thrift), decides that it needs to compete with other major players in loans that feature riskier combinations than it has traditionally originated. X Bank has over time acquired other smaller thrifts and banks focused on mortgage lending, and this has led to a number of deficiencies and gaps in the way mortgage loans are underwritten. Fortunately, the economic environment has been extremely favorable, with low interest rates and high home price appreciation contributing to low default rates. These conditions thus have masked, for the time being, any problems that might cause X Bank higher losses. The other bank, Z Bank, faces the same conditions; however, it is more diversified as a commercial bank and, in growing organically over time, has put in place strong processes and controls for all facets of the underwriting and servicing segments of the mortgage business. Further distinguishing the two firms is their differing reliance on analytic methods and data. X Bank has for several years employed relatively sophisticated data mining and simulation-based techniques to assess risk. Meanwhile, Z Bank has just begun to develop risk data warehouses and build modeling capabilities to assess mortgage credit risk.
Historically, it used simple measures of default risk that do not take into consideration possible changes in market conditions that could affect future credit risk outcomes. In place of such analytics, Z Bank has come to rely on the expertise of former underwriters placed in its Quality Control department. Their job has principally been to perform postorigination reviews of originated mortgages and determine whether there were any defects in the underwriting process that could pose risk to the firm.

In deciding whether to take on additional credit risk, X Bank relies on what it believes to be its comparative advantage: risk analytics. With losses on riskier segments of its business extraordinarily low, X Bank is satisfied that its estimates of credit risk are stable and reflect the underlying conditions in the market. Given this view, X Bank elects not to build up much of a quality control unit or to integrate its findings into credit-risk discussions. Z Bank, on the other hand, recognizes the limitations of its analytic capabilities, and that even if it had such an infrastructure, it would be of limited value since the current environment is completely unlike any seen in recent memory. Consequently, Z Bank believes that any exclusively analytic assessment of the credit risk in its portfolio must be augmented by other inputs, including the judgment of seasoned underwriters who have experience originating riskier mortgages.

The decision framework that both firms use to determine the amount of product risk each is willing to take on is dependent upon the common and unique set of circumstances (the situation) each bank confronts. X Bank believes it has better information and analytics by which to expand its business and be more competitive against other firms like Z Bank. At the same time, the QC department of Z Bank has concluded that the risks involved in expanding the product underwriting criteria are not sufficiently well understood to warrant taking on what appears to be higher risk. Z Bank management concurs with this conclusion despite the toll on market share this decision will cause, based on an understanding of the limitations of their data and analytics to accurately assess the amount of credit risk that could potentially accumulate should market conditions appreciably change.

By late 2007, the results from X and Z Banks’ decisions are clear. In the years following the original decision, the economy stalled, leading to one of the worst housing markets since the Great Depression. With home prices depreciating at double-digit rates and unemployment rising to 10 percent, credit losses on the riskier mortgages grew to levels that were multiples above what X Bank had estimated them to be in 2004. With their loan-loss reserves well understated for this risk and their capital levels weakening, X Bank experiences a run on its deposits that eventually leads to the closure of the bank by its regulator. In the years leading up to this event, X Bank had become the dominant mortgage originator, but did so at the expense of good risk management practices. Meanwhile, Z Bank largely avoided the mortgage credit meltdown by staying the course with its existing product set. That strategy wound up costing the firm several points of market share, but in the aftermath of the crisis the bank managed to pick up a major mortgage originator and through that combination regained a top-three position in the market while effectively managing its risk exposure.

A lesson from this example is that risk management decisions are highly dependent on the unique situation of the firm, and it is essential that risk managers keep their finger on the pulse of the factors that drive risk-taking. Dissecting the hypothetical case, X Bank risk managers relied too heavily on analytics at the expense of seasoned judgment, which in a period of unusually good credit performance should have signaled a greater emphasis on understanding the processes and controls underlying the underwriting activity. The situation in this case for X Bank featured an accommodating economic environment, strong analytic capabilities based on historical information, an aggressive management orientation toward market share at the expense of prudent risk-taking, and a limited appreciation for underwriting experience. Z Bank, facing the same economic conditions, came to a different conclusion and set of outcomes as a result. But in several important respects its situation was much different. It recognized its limitations in data and analytics and acknowledged its prowess in understanding the underwriting process and controls required to originate mortgages that could withstand different market conditions. Furthermore, it had a management team that embraced its risk manager's recommendations – not an insignificant factor that led to Z Bank's making the right risk decision in the end.

Situational risk management thus is a case-by-case assessment of the factors influencing risk decisions. Figure 2.2 provides a framework for conceptualizing situational risk management. The risk manager's primary activities of identifying, measuring, and managing the company's various risks are influenced heavily by a number of internal and external factors at any moment. Clearly market, industry, and political forces establish an economic and regulatory environment that serves as a backdrop to risk management activities. The period leading up to the financial crisis of 2008–2009 was characterized by robust economic growth, relatively relaxed regulatory oversight, and fierce competition among financial institutions. This environment influenced corporate attitudes and perspectives on risk-taking and risk management. With markets and assets performing relatively well during the period, risk outcomes in the form of credit losses and other measures of risk performance were unusually low. Coupled with strong competitive conditions, risk management took on a secondary role to the prime directives of growth and financial performance. In such an environment, the risk manager faces significant headwinds in outlining a case for maintaining risk discipline when historical measures of risk are low and competition is high. Consider a risk manager's situation in 2005 in establishing a view of mortgage credit risk for X Bank. As shown in Figure 2.2, home prices in the years leading up to 2005 had shown remarkable appreciation at the national level, with most markets performing well above the long-term average. Armed with a formidable array of quantitative analytics to estimate expected and unexpected credit losses on the bank's portfolio, the data would suggest that such a strong housing market would lead to low credit losses for the portfolio.
Management during such periods can be biased against activities that will raise costs or impede business objectives, as reviewed in more detail in Chapter 3. While a strong risk culture and governance process can significantly mitigate management tendencies to marginalize risk departments, the risk management team must remain vigilant in the performance of its core activities and in regular and objective assessment of future performance. During such times, pressures to accede to business objectives rise, placing countervailing motivations on the risk manager that can influence his interpretation of risk-taking and prospective risk outcomes. Once the crisis began, as unprecedented risks emerged and many financial institutions failed, external conditions promoted a very different climate for risk management, in which regulatory oversight of the financial industry stiffened and banks retrenched in an effort to stave off financial collapse as their capital deteriorated under the mounting pressures of large credit losses. In such an environment, greater focus on risk management, in part out of regulatory and financial necessity, becomes of paramount importance. Such vastly different internal and external conditions may introduce a set of tendencies for management, regulators, and risk managers to overreact. In such circumstances, underwriting standards may tighten to abnormal levels, resulting in a procyclical response that exacerbates the market downturn. Risk managers can seize this moment to strengthen not only the firm's risk infrastructure but also any governance and cultural practices that were previously deficient.

