THE MOVING TAIL

At this point, we are ready to conclude that the basic nature of randomness is uncertainty. Known odds, probabilities in the purest sense of the word, are an interesting man‐made exception to that rule. If we accept that uncertainty is what we are dealing with, a natural follow‐up question is: What is uncertainty like? A distinction we will make in this regard is between ‘benign’ and ‘wild’ uncertainty.5 Benign uncertainty means that we do not have perfect knowledge of the underlying process that generates the outcomes we observe, but the observations nonetheless behave as if they conform to some statistical process that we are able to recognize. Classic examples of this are the distribution of things like height and IQ in a population, which the normal distribution seems to approximate quite well.

While the normal distribution is often highlighted in discussions about ‘well‐behaved’ stochastic processes, many other theoretical distributions appear to describe real‐world phenomena with some accuracy. There is nothing, therefore, in the concept of benign uncertainty that rules out deviations from the normal distribution, such as fat tails or skews. It merely means that the data largely fits the assumptions of some theoretical distribution and appears to do so consistently over time. It is as if we have a grip on randomness.

Wild uncertainty, in contrast, means that there is scope for a more dramatic type of sea change. Now we are dealing with outcomes that represent a clear break with the past and a violation of our expectations as to what was even supposed to be possible. Imagine long stretches of calm and repetition punctuated by some extreme event. In these cases, what happened did not resemble the past in the least. Key words to look out for when identifying wild uncertainty are ‘unprecedented’, ‘unheard of’, and ‘inconceivable’, because they (while overused) signal that we might be dealing with a new situation, something that sends us off on a new path.

The crucial aspect of wild uncertainty is precisely that the tails of the distributions are in flux. In other words, the historically observed minimum and maximum outcomes can be surpassed at any given time. I will refer to the idea of an ever‐changing tail of a distribution as The Moving Tail. With wild uncertainty, an observation may come along that is outside the established range – by a lot. Such an event means that the tail of the distribution just assumed a very different shape. Put another way, there was a qualitative shift in the tail. Everything we thought we knew about the variable in question turned out to be not even in the ballpark.

An illustration of wild uncertainty and of a tail in flux is provided by ‘the Texas freeze’, a series of severe blizzards that took place in February 2021, spanning a 10‐day period. The blizzards and the accompanying low temperatures badly damaged physical structures, among them wellheads and generators related to the production and distribution of electricity. As the freeze set in, demand soared as people scrambled to get hold of whatever electricity they could to stay warm and keep their businesses going. In an attempt to bring more capacity to the market, the operator of the Texas power grid, ERCOT, raised the price of electricity to the legally mandated ceiling of 9,000 $/MWh. The price had touched that ceiling on prior occasions – but only for a combined total of three hours. The extremeness of this event lay in the fact that ERCOT kept it at this level for almost 90 consecutive hours.6 The normal trading range leading up to this point had been somewhere between 20 and 40 $/MWh.

Any analysis of this market prior to February 2021 would have construed tail risk as being about short‐lived spikes, which, when averaged out over several trading days, implied no serious market distress. The Texas freeze shifted the tail. It was a Black Swan. The consequences for market participants were massive,7 and there was nothing in the historical experience that convincingly pointed to the possibility that the price could or would remain at its maximum for 90 hours. After the fact, it looked obvious that something like that could happen. Prolonged winter freezes in Texas are very rare, but with the climate getting more extreme by the day, why not?

The ‘by a lot’ is actually an important qualifier of wild uncertainty. To see why, consider that whenever we have a dataset, some of the observations will represent the tail of the distribution. They are large but rare deviations from some more normal state. Let us say that we have, in a given dataset, a handful of observations that can be said to constitute the tail. There will be, by construction, a minimum and a maximum value, which are the most extreme values that history has had to offer so far.

Unless we are talking about a truly truncated distribution, like income having zero as its lower limit, it is a potential mistake to think that the ‘true’ underlying data‐generating process is somehow capped by the observed minimum and maximum values. If we feed all the observations we have into statistical software, we can ask it to identify the random process that most plausibly generated the patterns in the data. Now, if we take the process identified by the program and draw random values from it in a simulation, it will come up with a distribution that contains outcomes beyond the lowest/highest observed values in the dataset, and the probability it assigns to such outcomes does not drop to virtually zero. This will always happen as long as the approach is to assume that there is some underlying random process generating the data and to use real data to approximate it. It is as if the software doing the fitting ‘gets it’ that if we have observed certain extreme values, even more extreme observations cannot be ruled out. If we have observed a drop in the S&P 500 of minus 58% over a certain period of time, who would say that a drop of minus 60% is outside the realm of possibilities? The simulated extremes will lie somewhere to the left (right) of the minimum (maximum) observed in the data. The tail we model in this way will encompass the observed tail and then some.
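To make this concrete, here is a minimal sketch in Python of the fit‐and‐simulate exercise just described. The synthetic data, the choice of a Student's t distribution, and all parameters are illustrative assumptions on my part, not anything prescribed by the text; any distribution‐fitting routine would make the same point.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Illustrative "history": 2,000 fat-tailed daily returns. In practice the
# true generating process is unknown; here we pretend we only see the data.
observed = rng.standard_t(df=4, size=2_000) * 0.01

# Fit a Student's t distribution to the observations.
df, loc, scale = stats.t.fit(observed)

# Simulate fresh draws from the fitted process.
simulated = stats.t.rvs(df, loc=loc, scale=scale, size=100_000, random_state=rng)

print(f"observed  min/max: {observed.min():+.4f} / {observed.max():+.4f}")
print(f"simulated min/max: {simulated.min():+.4f} / {simulated.max():+.4f}")

# Probability mass the fitted distribution places beyond the observed range:
p_beyond = (stats.t.cdf(observed.min(), df, loc, scale)
            + stats.t.sf(observed.max(), df, loc, scale))
print(f"fitted probability of exceeding the observed range: {p_beyond:.4%}")
```

With these made‐up numbers the simulated range strictly contains the observed one, and the fitted distribution assigns a small but clearly non‐zero probability to outcomes beyond the historical extremes – which is exactly the point made above.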

The upshot of this discussion is that experiencing an outlier that is only somewhat more extreme than the hitherto observed minimum/maximum should fall within the realm of benign uncertainty. We should not be surprised or taken aback by it. The fitted distribution hands us an implied probability for such an outcome that is meaningfully different from zero. We have to add ‘by a lot’ for it to count as wild uncertainty, because then the tail has shifted dramatically and in a way that was by no means implied by the historical track record. It is an outlier so extreme that it has a probability of effectively zero, even when the underlying random process we use to form a view of the future has been fitted to all the tail events in the historical track record.

Under conditions of wild uncertainty, it is clear that the concept of probability starts looking increasingly subjective and unverifiable. Indeed, Taleb calls probability ‘the mother of all abstract concepts’ (Taleb, 2007, p. 133) and maintains that we cannot calculate the probabilities of shocks (Taleb, 2012, p. 8).8 It is important to see, though, that his scorn is reserved mostly for those who insist on using the symmetric normal distribution and its close relatives. The properties of the normal are seductive because we can derive, with relative ease, all sorts of interesting results, but it is, Taleb maintains, positively dangerous as a guide to decision‐making in a world of wild uncertainty. Why? Primarily because of how it rules out extreme outliers and blinds us to them. A key feature of the normal distribution is that its tails get thinner the further you move from the mean, which implies that the likelihood of extreme outcomes gets lower and lower. In fact, as we move away from the mean, the assigned probabilities drop very fast – much too fast, in Taleb's view (Taleb, 2007, p. 234). The stock market crash in October 1987, for example, saw a return of minus 20.5%. The odds of a drop of at least that magnitude would have been roughly one in a trillion according to the normal. In other words, anyone going by that distribution would have considered it, for practical purposes, an impossible event.
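The arithmetic behind that verdict is easy to reproduce. The sketch below prices a one‐day drop of 20.5% under a normal distribution; the daily volatility of roughly 1% is an illustrative assumption of mine (the implied odds depend heavily on it, which is itself part of the problem), but any realistic choice drives the probability to effective impossibility.

```python
from scipy import stats

# Illustrative assumption: daily returns are normal with mean zero and
# a daily volatility of about 1% (a typical ballpark figure).
mu, sigma = 0.0, 0.01

# Probability the normal assigns to a one-day return of -20.5% or worse,
# i.e. an event roughly 20 standard deviations below the mean.
p = stats.norm.cdf(-0.205, loc=mu, scale=sigma)

print(f"P(return <= -20.5%) = {p:.2e}")  # astronomically small
```

Under these parameters the implied odds come out even longer than one in a trillion; the precise figure is hostage to the volatility assumption, but the qualitative verdict – practical impossibility – is robust.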

The first priority, therefore, is to avoid the normal distribution like the plague. In its place, if we still feel compelled to work with probabilities, Taleb offers the idea of fractals. Fractals refer to a geometrical pattern in which the structure and shape of an object remain similar across different scales. The practical implication is that the likelihood of extreme events decreases at a much slower rate as outcomes get larger than it does under the normal. If one subscribes to this view, the probability of finding an exceptionally large oil field is not materially lower than that of finding a large or medium‐sized one, because the geological processes that generate them are scale‐independent. This relation between frequency and size is associated with so‐called power law distributions, which we will relate to socio‐economic processes in Chapter 7. According to Taleb, the idea of fractals should be our default, the baseline mental model for how probabilities change as we move further out on the tail (Taleb, 2007, p. 262).
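To see the difference in decay rates, the sketch below compares tail probabilities under a standard normal and under a Pareto (power law) distribution. The shape parameter of 2 and the thresholds are arbitrary choices for illustration; the scale‐free signature is that doubling the threshold always cuts the power‐law tail probability by the same constant factor, while the normal tail collapses ever faster.

```python
import numpy as np
from scipy import stats

# Tail probabilities P(X > k) at doubling thresholds.
thresholds = np.array([2.0, 4.0, 8.0, 16.0])

p_normal = stats.norm.sf(thresholds)           # standard normal tail
p_pareto = stats.pareto.sf(thresholds, b=2.0)  # power law tail: k**(-2)

for k, pn, pp in zip(thresholds, p_normal, p_pareto):
    print(f"k = {k:>4.0f}:  normal {pn:.2e}   power law {pp:.2e}")
```

Each doubling of the threshold divides the power‐law tail by four, whereas the normal tail drops from about 2% at k = 2 to the order of 10⁻⁵⁸ at k = 16 – the sense in which the normal ‘rules out’ extreme outliers.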

In many cases, we lack data that we can explore to map out the tail of a random process. In this kind of setting, uncertainty tends to be wild right out of the gate. Technological innovation fits right into this picture, because it injects novelty into the existing, already volatile, world order. New dynamics are set in motion, triggering unintended consequences and side effects that ripple through the system in an unpredictable fashion. Because we keep innovating, we also keep changing the rules of the game, forever adding to the complexity. Two Black Swans that have sprung from the onward march of technology are the emergence of the internet and the more recent invasion of social media and mobile phones into our lives. There was no existing dataset that we could have studied beforehand that might have suggested that such transformations of our reality were about to happen. Or, more importantly, that they were even possibilities at all. To appreciate how technologies that we are completely immersed in today and take for granted are actually Black Swans, cases of wild uncertainty, consider the words of Professor Adam Alter of New York University:

‘Just go back twenty years [to 2000] … imagine you could speak to people and say, hey, you are going to go to the restaurant and everyone's going to be sitting isolated and looking at a small device, and then they're going to go back home and spend four hours looking at that device, and then you're going to wake up in the morning and look at that device … and people are going to be willing to have body parts broken to preserve the integrity of that device … people would say that is crazy’9

Alter's thought experiment, travelling back 20 years and telling people about something highly consequential that later happened, is a useful test for deciding whether something qualifies as a Black Swan. If their imagined reaction to what you describe is that it is ridiculous or inconceivable, chances are you have found one.
