Part I
An Introduction to Risk Assessment – Its Uses, Processes, Approaches, Benefits and Challenges
CHAPTER 1
The Context and Uses of Risk Assessment
1.2 General Challenges in Decision-Making Processes
This section covers some of the general or contextual challenges in decision-making processes, including that of achieving an appropriate balance between rational considerations and intuition, as well as the possibility that a variety of biases may be present.
1.2.1 Balancing Intuition with Rationality
Most decisions are made based on a combination of intuition and rational considerations, with varying degrees of balance between them.
Intuitive approaches are typically characterised, driven or dominated by:
• Gut feel, experience and biases.
• Rapid decision-making with a bias to reinforce initial conclusions and reject counter-narratives.
• Ignoring or discounting items that are complex or not understood well.
• Little (formalised) thinking about risks, uncertainties and unknowns.
• Few (formalised) decision processes or governance procedures.
• Lack of transparency into decision criteria and the importance placed on various items.
• Seeking input from only a small set of people, rather than from a diverse group.
At its best, intuitive decision-making can be powerful and effective, e.g. a good decision reached with little investment of time or effort. Indeed, justification for such approaches can be made using the framework of “pattern recognition”; that is, the decision-maker (typically subconsciously) views the particular situation being faced as similar (or identical for decision purposes) to situations that have been experienced many times before. Thus, such approaches are most appropriate where a particular type of situation is faced frequently, where the consequences of a poor decision are not significant (or can be reversed), or in emergency situations where a very rapid decision is required. Examples include:
• Planning at what time to leave to travel to work in the morning, which may be based on many years of (non-documented) experience of using the same route.
• An experienced driver who is not overtly conscious of conditions on a road that he drives frequently, but is nevertheless making constant implicit decisions.
Of course, intuition-driven approaches can have their more extreme forms: an article in The New York Times of 20 October 2013 (“When C.E.O.'s Embrace the Occult”) reports the widespread use of fortune tellers by South Korean executives facing important decisions.
Rational approaches can be contrasted with intuitive ones, and are characterised by:
• Non-reliance on personal biases.
• Strong reliance on analysis, models and frameworks.
• Objective, holistic and considered thinking.
• Self-critical: ongoing attempts to look for flaws and possible improvements in the process and the analysis.
• Openness to independent review and discussion.
• Formalised processes and decision governance.
• Setting objectives and creating higher levels of transparency into explicit decision criteria.
• A desire to consider all factors that may be relevant, to incorporate alternative viewpoints and the needs of different stakeholders, and to achieve diverse input from various sources.
• Explicitly searching out more information, a wide variety of diverse inputs and the collection of data or expert judgement.
• Openness to using alternative tools and techniques where they may be appropriate.
• Willingness to invest more in time, processes, tools and communication.
• Exposing, challenging, overcoming or minimising biases that are often present in situations where insufficient reflection or analysis has taken place.
• (Usually) with some quantification and prioritisation.
• (Ideally) with an appropriate consideration of factors that may lead to goals being compromised (risks and uncertainties).
Since most decisions are made based on a combination of intuition and rational considerations, formalised risk assessment is, in principle, concerned with increasing the rational input into such processes.
Intuitive approaches may be less reliable for decisions concerned with major investment or with very long-term implications; it would seem logical that no management team could genuinely have already had very significant experience with large numbers of very similar or identical projects over their full life cycle.
On the other hand, it is probably fair to say that intuition is generally the dominant force in terms of how decisions are made in practice:
• A course of action that “feels” wrong to a decision-maker (but is apparently supported by rational analysis) is unlikely to be accepted. Similarly, a course of action that “feels right” to a decision-maker will rarely be rejected, even if the analysis would recommend doing so; rather, in each case, one would invariably search for factors that have been incorrectly assessed in (or omitted from) the rational approach. These may include important decision criteria that were overlooked, or other items that the team conducting the analysis was not aware of, but which were relevant from the decision-maker's perspective.
• In most business situations, there will almost always be some characteristics that are common from one project to another (otherwise the company may be straying from its core competence), and hence intuitive processes have some role. As a result, even where the use of rational approaches would seem appropriate (e.g. major investments, expansion or restructuring projects), such approaches may not receive the priority and attention that they deserve.
• Rational approaches are more complex to implement, requiring higher levels of discipline, extra time and potentially other investments; intuitive processes require less effort, and match many people's inherent personal preference for verbal communication and rapid action. In this context, some well-known quotes come to mind: “Opinion is the medium between knowledge and ignorance” (Plato), and “Too often we enjoy the comfort of opinion without the discomfort of thought” (John F. Kennedy).
• However much rational analysis has been conducted, management judgement (or intuition) will typically still need to play an important role in many decisions: very few situations can be understood perfectly, with all factors or risks identified and correctly captured. For example, some qualitative factors may not have been represented in the common terms required for a quantitative model (i.e. typically in financial terms). In addition, and as a minimum, there will always be some “unknown unknowns” that decision-makers need to be mindful of.
Thus, ideally, a robust and objective rational analysis would help to develop and inform a decision-maker's intuition (especially in the earlier stages of a decision process), and also to support and reinforce it (in later stages). Where there is a mismatch between the intuition of a particular decision-maker and the results of a rational analysis, in the first instance one may look for areas where the rational analysis is incomplete or based on incorrect assumptions: there could be factors that are important to the decision-maker that an analytically driven team is not aware of; ideally, these would be incorporated as far as possible into a revised and more robust rational analysis. On the other hand, there may be cases where, even once such factors are included, the rational and intuitive approaches still diverge in their recommendations. This may make it possible to show that the original intuition was incorrect, and to identify the drivers of the divergence; of course, in such cases extra rounds of communication with the decision-maker are generally required to explain the relevant issues. In other words, genuinely rational and objective analysis should ultimately align with intuition, and may serve to modify understanding and generate further intuition in parallel.
1.2.2 The Presence of Biases
The importance of intuitive decision-making, coupled with the presence of potential biases, creates yet more challenges for the implementation of rational and disciplined approaches to risk assessment. Biases may be grouped into those that are:
• Motivational or political. These are where one has some incentive to deliberately bias a process, a set of results or assumptions used.
• Cognitive. These are biases that are inherent to the human psyche, and often believed to have arisen for evolutionary reasons.
• Structural. These are situations where a particular type of approach inherently creates biases in the results, due to the methodology and tools used.
Motivational or political biases are common in many real-life decision situations, often resulting in optimistic scenarios being presented as a base case, or risks being ignored, for many reasons:
• The benefits and costs may have unequal or asymmetric impacts on different entities or people. For example, project implementation may allow (or require) one department to expand significantly, but may require another to be restructured.
• “Ignorance is bliss.” In some cases, there can be a lack of willingness to even consider the existence of risks. There are certainly contexts in which this reluctance may be justified (in terms of serving a general good): this would most typically apply where the fundamental stability of a system depends on the confidence of others and the credibility of actions, and especially where any lack of confidence can become detrimental or self-fulfilling. In such cases, the admission that certain risks are present can be taboo or unhelpful. For example:
• A banking regulator may be reluctant to disclose which institutions are most at risk from bankruptcy in the event of a severe economic downturn. The loss of confidence that may result could produce a run on the bank, in a self-fulfilling cycle (in which depositors withdraw their money due to perceived weakness, which then does weaken the institution in reality, and also may have a knock-on effect at other institutions).
• A central bank (such as the European Central Bank) may be unwilling to publicly admit that certain risks even exist (for example, the risk of a currency break-up, or of one country leaving the eurozone).
• Generally, some potential credit (or refinancing) events may be self-fulfilling. For example, a rumour (even if initially false) that a company has insufficient short-term funds to pay its suppliers may lead to an unwillingness on the part of banks to lend to that company, thus potentially turning the rumour into reality.
• A pilot needing to conduct an emergency landing of an aeroplane will no doubt try to reassure the passengers and crew that this is a well-rehearsed procedure, and not focus on the risks of doing so. Any panic among the passengers could ultimately be detrimental and hinder preparations for an evacuation of the aircraft, for example.
• Accountability and incentives. In some cases, there may be a benefit (or perceived benefit) to a specific party of underestimating or ignoring risks. For example:
• In negotiations (whether about contracts, mergers and acquisitions or with suppliers), the increased information and transparency associated with admitting that specific risks exist could be detrimental to the party doing so.
• Many publicly quoted companies are required to disclose risks in their filings with stock market regulators. Generally, companies are reluctant to provide such information in any more detail than is mandated, in order not to be perceived as having a riskier business than their competitors; a first-mover in such disclosure may suffer a consequential drop in share price. Therefore, such disclosures are most typically made at a very high level, are rather legalistic in nature and generally do not allow external analysts to truly understand or model the risks in the business in practice.
• “Don't worry, be happy” (or “We are too busy to spend (definite) time considering things that may never happen!” or “You are always so pessimistic!”). In a similar way to the “ignorance is bliss” concept, since identified risks are only potential, and may never happen, there is often an incentive to deny that such risks exist, to claim that they are not material, or to insist that they can be dealt with on an ad hoc basis as they arise. In particular, due to implementation time and other factors, accountability is often only considered at much later points in time (perhaps several years later), by which time the truly accountable person has generally moved to a different role, been promoted, or retired. In addition, defenders of such positions will be able to construct arguments that the adverse events could not have been foreseen, or were someone else's responsibility, or were due to non-controllable factors in the external environment, and so on. Thus, it is often perceived as more beneficial to deny the existence of a problem, or to claim that any issues would in any case be resolvable as they arise. For example:
• A senior manager or politician may insist that a project is still on track despite indications to the contrary, although the reality of the poor outcome may only finally be seen several years or decades later.
• A manager might not admit that there is a chance of longer-term targets being missed or objectives not being met (until such things happen).
• A project manager might not want to accept that there is a risk of a project being delivered late, or over budget, or not achieving its objectives (until such events actually occur).
• Management might not want to state that due to a deterioration in business conditions there is a risk that employees will be made redundant (until it actually happens).
• A service company bidding for a contract against an established competitor may claim that it can provide a far superior level of service at a lower cost (implicitly ignoring the risk that this might not be achievable). Once the business has been secured, “unexpected” items then start to occur, by which time it is too late to reverse the contract award. Unless the negotiated contracts have clear service-level agreements and penalty clauses that are adequate to compensate for non-delivery on promises, such deliberate “low-balling” tactics by potential suppliers may be rational; on the other hand, if one bids low and is contractually obliged to keep to that figure, then a range of significant difficulties could arise, so that such tactics may not be sensible.
• Clauses may exist in contracts that would only apply in exceptional circumstances (such as allowing consequential damages to be claimed if a party to the contract delivers performance that is materially below expectations). During contract negotiations, one or other party may insist that the clause should stay in the contract, whilst maintaining that it would never be enforced, because such circumstances could not happen.
Specific examples that relate to some of the above points (and occurred during the time at which this book was in the early stages of its writing) could be observed in relation to the 2012 Olympic Games in London:
• The Games were delivered for an expenditure of approximately £9bn. The original cost estimate submitted to the International Olympic Committee was around £2bn, at a time when London and Paris were in competition to host the games. Shortly after the games were awarded to London in July 2005, the budget estimate was revised to closer to £10bn, resulting (after the Games) in many media reports stating that they were “delivered within budget”. Some of the budget changes were stated as being due to heightened security needs following a major terrorist attack that occurred in London shortly after the bid was awarded (killing over 50 people). Of course, one can debate such reasons in the context of the above points. For example, the potential terrorist threat was already quite clear following the Madrid train bombings of 11 March 2004 (which killed nearly 200 people), the invasion of Iraq in 2003, and the attacks in the United States of 11 September 2001, to name a few examples; security had also been a highly visible concern during the 2004 Athens Olympics. An external observer may hypothesise that perhaps a combination of factors each played a role to some extent, including the potential that the original bid was biased downwards, or that the original cost budget had been estimated highly inaccurately. In any case, one can see the difficulty associated with assigning definitive responsibility in retrospect, and hence the challenge in ensuring that appropriate decisions are taken in the first place.
• A private company had been contracted by the UK government to provide the security staff for the Games; this required the recruitment and training of large numbers of staff. Despite the company apparently having provided, over many months, repeated reassurances that the recruitment process was on track, at the last minute (in the weeks and days before the Games) it was announced that there was a significant shortfall in the required staff, so that several thousand soldiers from the UK Armed Forces were required to step in. An external observer may hypothesise that the private company (implicitly, by its actions) did not perceive a net benefit in accepting or communicating the risk of non-delivery until the problem became essentially unsolvable by normal means.
Cognitive biases are often regarded as resulting from human beings' evolutionary instinct to classify situations into previously observed patterns, which provides a mechanism for making rapid decisions (mostly correctly) in complex or important situations. These include:
• Optimism. The trait of optimism is regarded by many experts as being an important human survival instinct, and generally inherent in many individual and group processes.
• Bias to action. Management rewards (both explicit and implicit) are often based on the ability to solve problems that arise; it is much rarer for rewards to be created around inaction or the taking of preventive measures. The bias towards action rather than prevention (in many management cultures) can lead to a lack of consideration of risks, which are, after all, only potential and not yet tangibly present.
• Influence and overconfidence. This refers to a belief that we have the ability to influence events that are actually beyond our control (i.e. that are essentially random). This can lead to an overestimation of one's ability to predict the future and explain the past, or to an insufficient consideration of the consequences and side effects. A poor outcome will be blamed on bad luck, whereas a favourable one will be attributed to skill:
• A simple example would be when one shakes dice extra hard to try to achieve certain numbers.
• People may make rapid decisions about apparently familiar situations, whereas in fact some aspect may be new and pose significant risks.
• Arguably, humans are reasonably good at assessing and managing individual risks, but much less effective at assessing the combined effect when there are multiple risks or interdependencies between them, or where the behaviour of a system (or model) output is non-linear as its input values are altered.
• Anchoring and confirmation. Anchoring means that the first piece of information given to someone (however misleading) tends to serve as a reference point, with future attitudes biased towards that point. New information is selectively filtered so as to reinforce the anchor, and is ignored or misinterpreted if it does not match the pre-existing anchor. One may also surmise that many educational systems (especially in the earlier and middle years) emphasise the development of students' ability to create a hypothesis and then defend it with logic and facts, with at best only a secondary focus on developing an enquiring mind that asks why an analysis or hypothesis may be wrong. Confirmation bias describes the tendency to focus more on finding data that confirm a view than on finding data that disprove or question it.
• Framing. This refers to making a different decision based on the same information, depending on how the situation is presented. Typically, behaviour differs when one is faced with a gain versus a loss (one is more often risk-seeking when trying to avoid losses, and risk-averse when concerned with possible gains):
• A consumer is generally more likely to purchase an item that is reduced in price from $500 to $400 (that is, to “save” $100), than to purchase the same item if it had always been listed at $400.
• An investor may decide to retain (rather than sell) some shares after a large fall in their value, hoping that the share price will recover. However, when given a separate choice as to whether to buy additional such shares, the investor would often not do so.
• Faced with a decision whether to continue or abandon a risky project (after some significant investment has already been made), a different decision may result depending on whether the choice is presented as: “Let's continue to invest, with the possibility of having no payback” (which is more likely to result in the project being rejected) or “We must avoid getting into a situation where the original investment was wasted” (which is more likely to result in a decision to continue).
• Framing effects also apply in relation to the units that are used to present a problem. For example, due to a tendency to think or negotiate in round terms, a different result may be achieved if one changes the currency or units of analysis (say from $ to £ or €, or from absolute numbers to percentages).
• Incompleteness. Historical data are inherently incomplete, as they reflect only one possible outcome of a range of possibilities that could have occurred. The consequence is that (having not observed the complete set of possible outcomes) one assumes that variability (or risk) is lower than it really is. A special case of this (sampling error) is survivorship bias (i.e. “winners” are observed but “losers” are not). For example:
• For stock indices, where poorly performing stocks are removed from the index and replaced by stocks that have performed well, the performance of the index is overstated compared with the true performance (which should use the original basket of stocks that made up the index, some of which may now be worthless); a simple simulation sketch of this effect is shown after this list.
• Similarly, truly catastrophic events that could have wiped out humanity have not yet occurred. In general, there can be a failure to consider possible extremes or situations that have never occurred (but could do so in reality), specifically those associated with low-probability, large-impact events. Having said that (as discussed in Chapter 2), truly rare events (especially those that are, in principle, present in any project context, such as an asteroid destroying life on the planet) are generally not relevant to include in project and business risk assessments, or in management decision-making.
• Group think. A well-functioning group should, in principle, be able to use its diversity of skills and experience to create a better outcome than most individuals would be able to. However, very often, the combination of dominant characters, hierarchical structures, an unwillingness to create conflict, or a lack of incentive to dissent or raise objections, can instead lead to poorer outcomes than if decisions had been left to a reasonably competent individual. The fact that individual failure is often punished, whereas collective failure is typically not, provides a major incentive for individuals to “go with the pack” or resort to “safety in numbers” (some argue this provides part of the explanation for “bubbles” and over-/underpricing in financial markets, even over quite long time periods).
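As an illustration of the survivorship bias mentioned above, the following minimal sketch (in Python, with purely illustrative, assumed distribution parameters; it is not taken from any actual index methodology) simulates the terminal growth factors of a basket of stocks and compares the average of the full original basket with that of the “survivors” only:

```python
import random

# Minimal sketch of survivorship bias (illustrative, assumed parameters):
# simulate the long-run growth factors of 1,000 stocks, then compare the
# average of the full original basket with that of the "survivors" only.
random.seed(42)

N = 1000
terminal = []
for _ in range(N):
    if random.random() < 0.2:        # assume 20% of stocks fail outright
        terminal.append(0.0)
    else:
        terminal.append(random.lognormvariate(0.05, 0.5))  # assumed growth model

true_mean = sum(terminal) / N                     # full original basket
survivors = [v for v in terminal if v > 0.5]      # "index" drops poor performers
survivor_mean = sum(survivors) / len(survivors)

print(f"Mean growth factor, full basket:    {true_mean:.2f}")
print(f"Mean growth factor, survivors only: {survivor_mean:.2f}")
```

Under these assumptions, the survivors-only average is systematically higher than that of the full basket: observing only the winners both overstates performance and understates the true variability.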
Structural biases are where particular types of approach inherently create bias in the results, independently of psychological or motivational factors. An important example is that a static model populated with most likely input values will, in general, not show the most likely value of the true output range (the “fallacy of the most likely”, as discussed in Chapter 4). Key driving factors for this include non-symmetric distributions of uncertainty, non-linear model logic and the presence of underlying event risks that are excluded from a base assumption. The existence of such biases is an especially important reason to use risk modelling; paraphrasing the words of Einstein, “a problem cannot be solved within the framework that created it”, and indeed the use of probabilistic risk techniques is a key tool for overcoming some of these limitations.
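As a simple illustration of the “fallacy of the most likely”, the following minimal sketch (in Python, with assumed, purely illustrative figures) compares a static model, in which a skewed cost input is set at its most likely value and an event risk is excluded from the base case, with the distribution obtained when both are simulated explicitly:

```python
import random

# Minimal sketch (illustrative figures only) of the "fallacy of the most
# likely": a static model built from most likely input values does not,
# in general, show the most likely or average value of the true output.
random.seed(1)

# Assumed cost model: base cost is triangular (min 90, mode 100, max 140,
# i.e. right-skewed), plus an event risk of a 25% chance of a 50 overrun
# that the static base case excludes.
MODE_COST = 100.0
static_result = MODE_COST            # static model: modal input, no event

trials = 100_000
outcomes = []
for _ in range(trials):
    cost = random.triangular(90, 140, 100)   # args: (low, high, mode)
    if random.random() < 0.25:               # event risk occurs
        cost += 50
    outcomes.append(cost)

mean = sum(outcomes) / trials
outcomes.sort()
median = outcomes[trials // 2]
prob_above_static = sum(o > static_result for o in outcomes) / trials

print(f"Static 'most likely' result: {static_result:.1f}")
print(f"Simulated mean:              {mean:.1f}")
print(f"Simulated median:            {median:.1f}")
print(f"P(outcome > static result):  {prob_above_static:.0%}")
```

In this illustrative set-up, the large majority of simulated outcomes exceed the static figure, and the simulated mean and median sit materially above it: the base case built from most likely values is structurally biased, whatever the motivations or psychology of those building it.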