CHAPTER 3
THE LUCK-SKILL CONTINUUM
IN 2006, TRADINGMARKETS, a company that helps people trade stocks, asked ten Playboy Playmates to select five stocks each. The idea was to see if they could beat the market. The winner was Deanna Brooks, Playmate of the Month in May 1998. The stocks she picked rose 43.4 percent, trouncing the S&P 500, which gained 13.6 percent, and beating more than 90 percent of the money managers who actively try to outperform a given index. Brooks wasn't the only one who fared well. Four of the other nine Playmates had better returns than the S&P 500, while less than a third of the active money managers did.1
Although the exercise was presumably a lighthearted effort at attracting attention, the results raise a serious question: How can a group of amateurs do a better job of picking stocks than the majority of dedicated professionals? You would never expect amateurs to outperform professional dentists, accountants, or athletes over the course of a year. In this case, the answer lies in the fact that investing is an activity that depends a great deal on luck, especially over a short period of time. In this chapter, I'll develop a simple model that will allow us to take a more in-depth look at the relative contributions of luck and skill. I'll also provide a framework for thinking about extreme outcomes and show how to anticipate the rate of reversion to the mean. A deeper discussion of the continuum between luck and skill can help us to avoid some of the mistakes described in chapters 1 and 2 and to make better decisions.
Sample Size, Not Time
Visualizing the continuum between luck and skill can help us to see where an activity lies between the two extremes, with pure luck on one side and pure skill on the other. In most cases, characterizing what's going on at the extremes is not too hard. As an example, you can't predict the outcome of a specific fair coin toss or the payoff from a slot machine. They are entirely dependent on chance. On the other hand, the fastest swimmer will almost always win the race. The outcome is determined by skill, with luck playing only a vanishingly small role (for example, the fastest swimmer could contract food poisoning in the middle of a meet and lose). But the extremes on the continuum capture only a small percentage of what really goes on in the world. Most of the action is in the middle, and having a sense of where an activity lies will provide you with an important context for making decisions.
As you move from right to left on the continuum, luck exerts a larger influence. It doesn't mean that skill doesn't exist in those activities. It does. It means that we need a large number of observations to make sure that skill can overcome the influence of luck. So Deanna Brooks would have to pick a lot more stocks and outperform the pros for a lot longer before we'd be ready to say that she is skillful at picking stocks. (The more likely outcome is that her performance would revert to the mean and look a lot more like the average of all investments.) In some endeavors, such as selling books and movies, luck plays a large role, and yet best-selling books and blockbuster movies don't revert to the mean over time. We'll return to that subject later to discuss why that happens. But for now we'll stick to areas where luck does even out the results over time.
When skill dominates, a small sample is sufficient to understand what's going on. When Roger Federer was in his prime, you could watch him play a few games of tennis and know that he was going to win against all but one or two of the top players. You didn't have to watch a thousand games. In activities that are strongly influenced by luck, a small sample is useless and even dangerous. You'll need a large sample to draw any reasonable conclusion about what's going to happen next. This link between luck and the size of the sample makes complete sense, and there is a simple model that demonstrates this important lesson. Figure 3-1 shows a matrix with the continuum on the bottom and the size of the sample on the side. In order to make a sound judgment, you must choose the size of your sample with care.
We're naturally inclined to believe that a small sample is representative of a larger sample. In other words, we expect to see what we've already seen. This fallacy can run in two directions. In one direction, we observe a small sample and believe, falsely, that we know what all of the possibilities look like. This is the classic problem of induction, drawing general conclusions from specific observations. We saw, for instance, that small schools produced students with the highest test scores. But that didn't mean that the size of the school had any influence on those scores. In fact, small schools also had students with the lowest scores.
FIGURE 3-1
Sample size and the luck-skill continuum
Source: Analysis by author.
In many situations we have only our observations and simply don't know what's possible.2 To put it in statistical terms, we don't know what the whole distribution looks like. The greater the influence luck has on an activity, the greater our risk of using induction to draw false conclusions. To put this another way, think of an investor who trades successfully for a hundred days using a particular strategy. He will be tempted to believe that he has a fail-safe way to make money. But when the conditions of the market change, his profits will turn to losses. A small number of observations fails to reveal all of the characteristics of the market.
We can err in the opposite direction as well, unconsciously assuming that there is some sort of cosmic justice, or a scorekeeper in the sky who will make things even out in the end. This is known as the gambler's fallacy. Say you're watching a coin being tossed. Heads comes up three times in a row. What do you think the next toss will show? Most people will say tails. It feels as if tails is overdue. But it's not. There is a 50-50 chance of heads or tails on every toss, and one flip has no influence on any other. But if you toss the coin a million times, you will, in fact, see about half a million heads and half a million tails. At the same time, in the universe of the possible, you might see heads come up a hundred times in a row if you toss the coin long enough.
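A minimal simulation makes the point concrete. This sketch, a hypothetical illustration assuming a fair coin, estimates the chance of heads on the toss that immediately follows a run of three heads; the figure it prints stays near 50 percent, with no "overdue" tails in sight.

```python
import random

random.seed(42)

def next_toss_after_three_heads(trials=1_000_000):
    """Estimate P(heads) on the toss that follows three heads in a row."""
    follows_heads = 0
    follows_total = 0
    streak = 0
    for _ in range(trials):
        toss = random.random() < 0.5  # True means heads
        if streak >= 3:               # the previous three tosses were all heads
            follows_total += 1
            follows_heads += toss
        streak = streak + 1 if toss else 0
    return follows_heads / follows_total

print(next_toss_after_three_heads())  # hovers around 0.5
```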
It turns out that many things in nature do even out, which is why we have evolved to think that all things balance out. Several days of rain are likely to be followed by fair weather. But in cases near the side of the continuum where outcomes are independent of one another, or close to being so, the gambler's fallacy is alive and well. This influence casts its net well beyond naive gamblers and ensnares trained scientists, too.3
When you're attempting to select the correct size of a sample to analyze, it's natural to assume that the more you allow time to pass, the larger your sample will be. But the relationship between the two is much more complicated than that. In some instances, a short amount of time is sufficient to gather a relatively large sample, while in other cases a lot of time can pass and the sample will remain small. You should consider time as independent from the size of the sample.
Evaluation of competition in sports illustrates this point. In U.S. men's college basketball, a game lasts forty minutes and each team takes possession of the ball an average of about sixty-five times during the game. Since the number of times each team possesses the ball is roughly equal, possession has little to do with who wins. The team that converts possessions into the most points will win. In contrast, a men's college lacrosse game is sixty minutes long but each team takes possession of the ball only about thirty-three times. So in basketball, each team gets the ball almost twice a minute, while in lacrosse each team gets the ball only once every couple of minutes or so. The size of the sample of possessions in basketball is almost double that of lacrosse. That means that luck plays a smaller role in basketball, and skill exerts a greater influence on who wins. Because the size of the sample in lacrosse is smaller and the number of interactions on the field so large, luck has a greater influence on the final score, even though the game is longer.4
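A quick simulation can illustrate the effect of sample size, though the numbers here are purely illustrative and are not drawn from actual basketball or lacrosse statistics. Suppose the stronger team converts each possession with probability 0.55 and the weaker team with probability 0.45; the only thing that changes between the two runs is the number of possessions per team.

```python
import random

random.seed(0)

def win_prob(possessions, p_strong=0.55, p_weak=0.45, games=20_000):
    """Fraction of simulated games won by the more skilled team."""
    wins = 0
    for _ in range(games):
        strong = sum(random.random() < p_strong for _ in range(possessions))
        weak = sum(random.random() < p_weak for _ in range(possessions))
        wins += strong > weak  # ties count as losses, which only understates the edge
    return wins / games

print(win_prob(65))  # larger sample of possessions: skill shows through more often
print(win_prob(33))  # smaller sample: upsets by the weaker team are more common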
The Two-Jar Model
Imagine that you have two jars filled with balls.5 Each ball has a number on it. The numbers in one jar represent skill, while the numbers in the other represent luck. Higher numbers are better. You draw one ball from the jar that represents skill, one from the jar that represents luck, and then add them together to get a score. Figure 3-2 shows a case where the numbers for skill and luck follow a classic bell curve. But the numbers can follow all sorts of distributions. The idea is to fill each jar with numbers that capture the essence of the activity you are trying to understand.
FIGURE 3-2
A simple example of skill and luck distributions
Source: Analysis by author.
To represent an activity that's completely dependent on skill, for instance, we can fill the jar that represents luck with zeros. That way, only the numbers representing skill will count. If we want to represent an activity that is completely dependent on luck, such as roulette, we fill the other jar with zeros. Most activities are some blend of skill and luck.
Here's a simple example. Let's say that the jar representing skill has only three numbers, −3, 0, and 3, and that the jar representing luck has −4, 0, and 4. We can easily list all of the possible outcomes, from −7, which reflects poor skill and bad luck, to 7, the combination of excellent skill and good luck. (See figure 3-3.) Naturally, anything real that we model would be vastly more complex than this example, but these numbers suffice to make several crucial points.
It is possible to do poorly in an activity even with good skill if the influence of luck is sufficiently strong and the number of times you draw from the jars is small. For example, if your level of skill is 3 but you draw a −4 from the jar representing luck, then bad luck trumps skill and you score −1. It's also possible to have a good outcome without being skilled. Your skill at −3 is as low as it can be, but your blind luck in choosing 4 gives you an acceptable score of 1.
FIGURE 3-3
Simple jar model
Source: Analysis by author.
Of course, this effect goes away as you increase the size of the sample. Think of it this way: Say your level of skill is always 3. You draw only from the jar representing luck. In the short run, you might pull some numbers that reflect good or bad luck, and that effect may persist for some time. But over the long haul, the expected value of the numbers representing luck is zero as your draws of balls marked 0, 4, and −4 even out. Ultimately your level of skill, represented by the number 3, will come through.6
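A short sketch of the two-jar model, under the same assumptions as the example above, shows both points: the nine possible single-draw outcomes, and the way a fixed skill of 3 emerges from the noise as the number of draws from the luck jar grows.

```python
import random

random.seed(1)

skill_jar = [-3, 0, 3]
luck_jar = [-4, 0, 4]

# All nine possible single-draw outcomes, as in figure 3-3.
print(sorted(s + l for s in skill_jar for l in luck_jar))

skill = 3  # one draw from the skill jar, held fixed
scores = [skill + random.choice(luck_jar) for _ in range(10_000)]

print(sum(scores[:5]) / 5)        # a handful of draws: luck can dominate
print(sum(scores) / len(scores))  # many draws: the average converges toward the skill of 3
```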
The Paradox of Skill—More Skill Means Luck Is More Important
This idea also serves as the basis for what I call the paradox of skill. As skill improves, performance becomes more consistent, and therefore luck becomes more important. Stephen Jay Gould, a renowned paleontologist at Harvard, developed this argument to explain why no baseball player in the major leagues has had a batting average of .400 or more for a full season since Ted Williams hit .406 in 1941 while playing for the Boston Red Sox.7 Gould started by considering some common explanations. The first was that night games, distant travel, diluted talent, and better pitching had all impeded batters. While those factors may have had some influence on the results, none are sufficient to explain the failure to achieve a .400 average. Another possibility was that Williams was not only the best hitter of his era, but that he was better than any other hitter to come along since then. Gould quickly dismissed that argument by showing that in every sport where it can be measured, performance had steadily improved over time. Williams, as good as he was in his era, would certainly not stand out as much if he were to be compared with players today.
At first glance, that may seem contradictory. But the improvement in performance since 1941 may not be as apparent in baseball as it is in other sports because batting average has remained relatively stable, at around .260–.270, for decades. But the stable average masks two important developments. First, batting average reflects not individual skill but rather the interaction between pitchers and hitters. It's like an arms race. As pitchers and hitters both improve their skills on an absolute basis, their relative relationship stays static. Although both pitchers and hitters today are some of the most skillful in history, they have improved in lockstep.8 But that lockstep was not ordained entirely by nature. The overseers of Major League Baseball have had a hand in it. In the late 1960s, for example, when it appeared that pitchers were getting too good for the batters, they changed the rules by lowering the pitcher's mound by five inches and by shrinking the strike zone, allowing the hitters to do better. Thus the rough equilibrium between pitchers and hitters reflects the natural evolution of the players as well as a certain amount of intervention from league officials.
Gould argues that there are no more .400 hitters because all professional hitters have become more skillful, and therefore the difference between the best and worst has narrowed. Training has improved greatly in the last sixty years, which has certainly had an effect on this convergence of skills. In addition, the leagues began recruiting players from around the world, greatly expanding the pool of talent. Hungry kids from the Dominican Republic (Sammy Sosa) and Mexico (Fernando Valenzuela) brought a new level of skill to the game. At the same time, luck continued to play a meaningful role in determining an individual player's batting average. Once the pitcher lets go of a ball, it is still hard to predict whether the batter, however skilled, will connect with it and what will happen if he does.
The key idea, expressed in statistical terms, is that the variance of batting averages has shrunk over time, even as the skill of the hitters has improved. Figure 3-4 shows the standard deviation and coefficient of variation for batting averages by decade since the 1870s. Variance is simply standard deviation squared, so a reduction in standard deviation corresponds to a reduction in variance. The coefficient of variation is the standard deviation divided by the average of all the hitters, which provides an effective measure of how scattered the batting averages of the individual players are from the league average. The figure shows that batting averages have converged over the decades. While Gould focused on batting average, this phenomenon is observable in other relevant statistics as well. For example, the coefficient of variation for earned run average, a measure of how many earned runs a pitcher allows for every nine innings pitched, has also declined over the decades.9
FIGURE 3-4
Reduction in standard deviation in Major League Baseball batting averages
Source: Analysis by author.
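The two measures are easy to compute. The snippet below uses made-up batting averages rather than the historical data behind figure 3-4, but it shows the calculation: a tighter spread of averages produces a smaller standard deviation and a smaller coefficient of variation.

```python
from statistics import mean, pstdev

# Illustrative numbers only, not the actual data from figure 3-4.
averages_wide = [0.230, 0.250, 0.260, 0.270, 0.280, 0.310, 0.340]   # wide spread of skill
averages_tight = [0.250, 0.258, 0.265, 0.270, 0.275, 0.285, 0.300]  # converged skill

for label, avgs in [("wide spread", averages_wide), ("tight spread", averages_tight)]:
    sd = pstdev(avgs)
    cv = sd / mean(avgs)
    print(f"{label}: standard deviation = {sd:.4f}, coefficient of variation = {cv:.4f}")
```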
This decline in variance explains why there are no more .400 hitters. Since everybody gets better, no one wins quite as dramatically. In his day, Williams was an elite hitter and the variance was large enough that he could achieve such an exalted average. Today, the variance has shrunk to the point that elite hitters have only a tiny probability of matching his average. If Williams played today and had the same level of skill that he had in 1941 relative to other players, his batting average would not come close to .400.
Hitting a baseball in the major leagues is one of the hardest tasks in all of sports. A major league pitcher throws a baseball at speeds of up to one hundred miles an hour with the added complication of sideways or downward movement as it approaches the plate. The paradox of skill says that even though baseball players are more skillful than ever, skill plays a smaller role in determining batting averages than it did in the past. That's because the difference between success and failure for the batter has come to depend on a mistake of only fractions of an inch in where he places his bat or thousandths of a second in his timing in beginning an explosive and nearly automatic swing. But because everyone is uniformly more skillful, the vagaries of luck are more important than ever.
You can readily see how the paradox of skill applies to other competitive activities. A company can improve its absolute performance, for example, but will remain at a competitive parity if its rivals do the same.10 Or if stocks are priced efficiently in the market, luck will determine whether an investor correctly anticipates the next price move up or down. When everyone in business, sports, and investing copies the best practices of others, luck plays a greater role in how well they do.
For activities where little or no luck is involved, the paradox of skill leads to a specific and testable prediction: over time, absolute performance will steadily approach the point of physical limits, such as the speed at which a human can run a mile. And as the best competitors confront those limits, the relative performance of the participants will converge. Figure 3-5 shows this idea graphically. As time goes on, the picture evolves from one that looks like the left side to one that looks more like the right side. The average of the distribution of skill creeps toward peak performance and the slope of the right tail gets steeper as the variance shrinks, implying results that are more and more alike.
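The prediction can also be illustrated with a simple simulation, assuming (purely for illustration) normally distributed skill and luck. Holding the spread of luck constant and narrowing the spread of skill, the most skilled competitor claims the top result far less often.

```python
import random

random.seed(2)

def top_skill_wins(skill_sd, luck_sd=5.0, players=20, seasons=50_000):
    """How often the most skilled competitor posts the best result."""
    wins = 0
    for _ in range(seasons):
        skills = [random.gauss(0, skill_sd) for _ in range(players)]
        results = [s + random.gauss(0, luck_sd) for s in skills]
        wins += results.index(max(results)) == skills.index(max(skills))
    return wins / seasons

print(top_skill_wins(skill_sd=5.0))  # wide skill spread: the best competitor often wins
print(top_skill_wins(skill_sd=1.0))  # skills converge: luck decides far more outcomes
```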
We can test this prediction to see if it is true. Consider running foot races, especially the marathon, one of the oldest and most popular sports events in history. The race covers 26 miles and 385 yards. It was introduced as an original Olympic event in 1896, roughly twenty-four hundred years after—legend has it—Pheidippides ran to his home in Athens from the battlefield of Marathon, where his countrymen had just defeated the Persians. When Pheidippides arrived, he proclaimed, “We have won!” He then dropped dead.
FIGURE 3-5
The paradox of skill leads to clustered results
Source: Analysis by author.
John Brenkus, host of Sports Science for the television network ESPN, speculates on the limits of human performance in his book The Perfection Point. After giving consideration to a multitude of physical factors, he concludes that the fastest time that a human can ever run a marathon is 1 hour, 57 minutes, and 58 seconds.11 As I write this, the world record, held by Patrick Makau of Kenya, is 2 hours, 3 minutes, and 38 seconds. So Makau's record is 5 minutes and 40 seconds slower than what is theoretically possible, according to Brenkus.
Figure 3-6 shows two results from each men's Olympic marathon from 1932 to 2008. The first is the time of the winner. That time dropped by about twenty-five minutes during those years. This translates into a pace that is almost one minute faster each mile, which (as you runners out there know) is a substantial improvement, even considering that it was achieved over three-quarters of a century. The figure also shows the difference between the time of the gold medalist and the man who came in twentieth. As the paradox of skill predicts, that gap has narrowed from close to forty minutes in 1932 to around nine minutes in 2008. So as everyone's skill has improved, the times of the twentieth-place finisher and the winner have converged.
FIGURE 3-6
Men's Olympic marathon times and the paradox of skill
Source: www.olympicgamesmarathon.com and analysis by author.
The two-jar model shows that luck can overwhelm skill in the short term if the variance of the distribution of luck is larger than the variance of the distribution of skill. In other words, if everyone gets better at something, luck plays a more important role in determining who wins. Let's return to that model now.
The Ingredients of an Outlier
Note that the extreme values in the two-jar model are −7 and 7. The only way to get those values is to combine the worst skill with the worst luck or the best skill with the best luck. Since the poorest performers generally die off in a competitive environment, we'll concentrate on the best. The basic argument is easy to summarize: great success combines skill with a lot of luck. You can't get there by relying on either skill or luck alone. You need both.
This is one of the central themes in Malcolm Gladwell's book, Outliers. As one of his examples, Gladwell tells the story of Bill Joy, the billionaire cofounder of Sun Microsystems, who is now a partner in the venture capital firm Kleiner Perkins. Joy was always exceptionally bright. He scored a perfect 800 on the math section of the SAT and entered the University of Michigan at the age of sixteen. To his good fortune, Michigan had one of the few computers in the country that had a keyboard and screen. Everywhere else, people who wanted to use a computer had to feed punched cards into the machine to get it to do anything (or more likely, wait for a technician to do it). Joy spent an enormous amount of time learning to write programs in college, giving him an edge when he entered the PhD program for computer science at the University of California, Berkeley. By the time he had completed his studies at Berkeley, he had about ten thousand hours of practice in writing computer code.12 But it was the combination of his skill and good luck that allowed him to start a software company and accrue his substantial net worth. He could have been just as smart and gone to a college that had no interactive computers. To succeed, Joy needed to draw winning numbers from both jars.
Gladwell argues that the lore of success too often dwells on an individual's personal qualities, focusing on how grit and talent paved the way to the top. But a closer examination always reveals the substantial role that luck played. If history is written by the winners, history is also written about the winners, because we like to see clear cause and effect. Luck is boring as the driving force in a story. So when talking about success, we tend to place too much emphasis on skill and not enough on luck. Luck is there, though, if you look. A full account of these stories of success shows, as Gladwell puts it, that “outliers reached their lofty status through a combination of ability, opportunity, and utterly arbitrary advantage.”13 This is precisely what the two-jar model demonstrates.
Outliers show up in another way. Let's return to Stephen Jay Gould, baseball, and the 1941 season. Not only was that the year that Ted Williams hit .406, it was the year that Joe DiMaggio got a hit in fifty-six straight games. Of the two feats, DiMaggio's streak is considered the more inviolable.14 While no player has reached a .400 batting average since Williams did, George Brett (.390 in 1980) and Rod Carew (.388 in 1977) weren't far off. The closest anyone has come to DiMaggio's streak was in 1978, when Pete Rose hit safely in forty-four games, only 80 percent of DiMaggio's record.
“Long streaks are, and must be, a matter of extraordinary luck imposed on great skill,” wrote Gould.15 That's exactly how you generate a long streak with the two jars. Here's a way to think about it: Say you draw once from the jar representing skill and then draw repeatedly from the other. The only way to have a sustained streak of success is to start with a high value for skill and then be lucky enough to pull high numbers from then on to represent your good luck. As Gould emphasizes, “Long hitting streaks happen to the greatest players because their general chance of getting a hit is so much higher than average.”16 For instance, the probability that a .300 hitter gets three hits in a row is 2.7 percent (= .3³), while the probability that a .200 hitter gets three hits in a row is 0.8 percent (= .2³). Good luck alone doesn't carry the day. While not all great hitters have streaks, all of the records for the longest streaks are held by great hitters. As a testament to this point, the players who have enjoyed streaks of hits in thirty or more consecutive games have a mean batting average of .303, well above the league's long-term average.17
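The arithmetic generalizes to streaks of any length, assuming each attempt is independent. A small hypothetical helper makes the comparison explicit, and shows how quickly the gap between the two hitters widens as the streak gets longer.

```python
def streak_probability(hit_rate, length):
    """Probability of a hit on each of `length` consecutive attempts, assuming independence."""
    return hit_rate ** length

for avg in (0.200, 0.300):
    print(avg, streak_probability(avg, 3))   # 0.008 versus 0.027, matching the text
    print(avg, streak_probability(avg, 20))  # the gap explodes as the streak lengthens
```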
Naturally, this principle applies well beyond baseball. In other sports, as well as the worlds of business and investing, long winning streaks always meld skill and luck. Luck does generate streaks by itself, and it's easy to confuse streaks due solely to luck with streaks that combine skill and luck. But when there are differences in the level of skill in a field, the long winning streaks go to the most skillful players.
Reversion to the Mean and the James-Stein Estimator
Using the two jars also provides a useful way to think about reversion to the mean, the idea that an outcome that is far from the average will be followed by an outcome that is closer to the average. Consider the top four combinations (−3 skill, 4 luck; 3 skill, 0 luck; 0 skill, 4 luck; and 3 skill, 4 luck), which sum to 15. Of the total of 15, skill contributes 3 (−3, 3, 0, 3) and luck contributes 12 (4, 0, 4, 4). Now, let's say you hold on to the numbers representing skill. Your skill remains the same over the course of this exercise. Then you return the numbers representing luck to the jar and draw a new set of numbers. What would you expect the new sum to be? Since the skill contribution remains unchanged at 3 and the expected value of luck is zero, the expected value of the new outcome is 3. That is the idea behind reversion to the mean.
We can do the same exercise for the bottom four outcomes (−3 skill, −4 luck; 0 skill, −4 luck; −3 skill, 0 luck; and 3 skill, −4 luck). They add up to −15, and the contribution from skill alone is −3. Here, also, with a new draw the expected value for luck is zero, so the total goes from −15 to an expected value of −3. In both cases, skill remains the same but the large contributions from either good luck or bad luck shrink toward zero.
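The same shrinkage can be seen by simulating the two jars: take everyone whose first score was 3 or better, keep their skill numbers, redraw their luck, and the group's average falls back toward its skill component. This sketch, under the same jar assumptions as before, does exactly that.

```python
import random

random.seed(3)

skill_jar = [-3, 0, 3]
luck_jar = [-4, 0, 4]

players = [(random.choice(skill_jar), random.choice(luck_jar)) for _ in range(100_000)]
top = [(s, l) for s, l in players if s + l >= 3]  # first-round scores of 3 or better

first_round = sum(s + l for s, l in top) / len(top)
second_round = sum(s + random.choice(luck_jar) for s, _ in top) / len(top)  # skill kept, luck redrawn

print(first_round, second_round)  # the second-round average falls back toward the group's skill
```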
While most people seem to understand the idea of reversion to the mean, using the jars and the continuum between luck and skill can add an important dimension to this thinking. In the two-jar exercise, you draw only once from the jar representing skill; after that, your level of skill is assumed to remain the same. This is an unrealistic assumption over a long period of time but very reasonable for the short term. You then draw from the jar representing luck, record your value, and return the number to the jar. As you draw again and again, your scores reflect stable skill and variations in luck. In this form of the exercise, your skill ultimately determines whether you wind up a winner or a loser.
The position of the activity on the continuum defines how rapidly your score goes toward an average value, that is, the rate of reversion to the mean. Say, for example, that an activity relies entirely on skill and involves no luck. That means the number you draw for skill will always be added to zero, which represents luck. So each score will simply be your skill. Since the value doesn't change, there is no reversion to the mean. Marion Tinsley, the greatest player of checkers, could win all day long, and luck played no part in it. He was simply better than everyone else.
Now assume that the jar representing skill is filled with zeros, so that your score is determined solely by luck. The expected value of every incremental draw is the same: zero, the mean of the distribution of luck. So every subsequent outcome has an expected value that represents complete reversion to the mean. In activities that are all skill, there is no reversion to the mean. In activities that are all luck, there is complete reversion to the mean. So if you can place an activity on the luck-skill continuum, you have a sound starting point for anticipating the rate of reversion to the mean.
In real life, we don't know for sure how skill and luck contribute to the results when we make decisions. We can only observe what happens. But we can be more formal in specifying the rate of reversion to the mean by introducing the James-Stein estimator, with a focus on what is called the shrinking factor.18 This construct is easiest to understand by using a concrete example. Say you have a baseball player, Joe, who hits .350 for part of the season, when the average of all players is .265. You don't really believe that Joe will average .350 forever, because even if he's an above-average hitter, he's likely been the beneficiary of good luck recently. You'd like to know what his average will look like over a longer period of time. The best way to estimate that is to reduce his average so that it is closer to .265. The James-Stein estimator includes a factor that tells you how much you need to shrink Joe's observed .350 so that his number more closely resembles his true ability in the long run. Let's go straight to the equation to see how it works:
Estimated true average = Grand average + shrinking factor (observed average − grand average)
The estimated true average would represent Joe's true ability. The grand average is the average of all of the players (.265), and the observed average is Joe's average during his period of success (.350). In a classic article on this topic, two statisticians named Bradley Efron and Carl Morris estimated the shrinking factor for batting averages to be approximately .2. (They used data on batting averages from the 1970 season with a relatively small sample, so consider this as illustrative and not definitive.)19 Here is how Joe's average looks using the James-Stein estimator:
Estimated true average = .265 + .2 (.350 − .265)
According to this calculation, Joe is most likely going to be batting .282 for most of the season. The equation can also be used for players who have averages below the grand average. For example, the best estimate of true ability for a player who is hitting only .175 for a particular stretch is .247, or .265 + .2 (.175 − .265).
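As a sketch, the estimator amounts to a one-line function. The example below reproduces the two calculations in the text, using the roughly 0.2 shrinking factor that Efron and Morris estimated.

```python
def james_stein_estimate(observed, grand_average, shrinking_factor):
    """Shrink an observed average toward the grand average."""
    return grand_average + shrinking_factor * (observed - grand_average)

# Joe's hot streak, and a cold hitter, with the roughly 0.2 shrinking factor from the text.
print(james_stein_estimate(0.350, 0.265, 0.2))  # about .282
print(james_stein_estimate(0.175, 0.265, 0.2))  # about .247
```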
For activities that are all skill, the shrinking factor is 1.0, which means that the best estimate of the next outcome is the prior outcome. When Marion Tinsley was playing checkers, the best guess about who would win the next game was Marion Tinsley. If you assume that skill is stable in the short term and that luck is not a factor, this is the exact outcome that you would expect.
For activities that are all luck, the shrinking factor is 0, which means that the expected value of the next outcome is the mean of the distribution of luck. In most American casinos, the mean of that distribution for roulette is a loss of 5.26 percent of the amount wagered, the house edge, and no amount of skill can change that. You may win a lot for a while or lose a lot for a while, but if you play long enough, you will lose 5.26 percent of your money. If skill and luck play an equal role, then the shrinking factor is 0.5, halfway between the two. So we can assign a shrinking factor to a given activity according to where that activity lies on the continuum. The closer the activity is to all skill, the closer the factor is to 1. The larger the role that luck plays, the closer the factor is to zero. We will see a specific example of how these shrinking factors correlate with skill in chapter 10.
The James-Stein estimator can be useful in predicting the outcome of any activity that combines skill and luck. To use one example, the return on invested capital for companies reverts to the mean over time. In this case, the rate of reversion to the mean reflects a combination of a company's competitive position and its industry. Generally speaking, companies that deal in technology (and companies whose products have short life cycles) tend to revert more rapidly to the mean than established companies with stable demand for their well-known consumer products. So Seagate Technology, a maker of hard drives for computers, will experience more rapid reversion to the mean than Procter & Gamble, the maker of the best-selling detergent, Tide, because Seagate has to constantly innovate, and even its winning products have a short shelf life. Put another way, companies that deal in technology have a shrinking factor that is closer to zero.
Similarly, investing is a very competitive activity, and luck weighs heavily on the outcomes in the short term. So if you are using a money manager's past returns to anticipate her future results, a low shrinking factor is appropriate. Past performance is no guarantee of future results because there is too much luck involved in investing.
Understanding the rate of reversion to the mean is essential for good forecasting. The continuum of luck and skill, as our experience with the two jars has shown, provides a practical way to think about that rate and ultimately to measure it.
So far, I have assumed that the jars contain numbers that follow a normal distribution, but in fact, distributions are rarely normal. Furthermore, the level of skill changes over time, whether you're talking about an athlete, a company, or an investor. But using jars to create a model is a method that can accommodate those different distributions. Chapters 5 and 6 will examine how skill changes over time and what forms luck can take.
Visualizing luck and skill as a continuum provides a simple concept that can carry a lot of intellectual freight. It allows us to understand when luck can make your level of skill irrelevant, especially in the short term, as we saw with the Playboy Playmates. It allows us to think about extreme performance, as in the cases of Bill Joy and Joe DiMaggio. And it makes it possible for us to calibrate the rate of reversion to the mean, as we did with batting averages. Each of these ideas is essential to making intelligent predictions.
Chapter 4 looks at techniques for placing activities on the continuum. It's time to make the ideas from the continuum operational.