Part One
Why You Can Beat the Experts
Chapter 1
Are Experts Trying Too Hard?

“A speculator can always be beset by an unfathomable event – a constellation of unpredictable and unforeseen events – that leads to a disaster that seemingly was impossible, and it's always important to keep this in mind.”

– Victor Niederhoffer, commenting on the 1997 Asian Crisis1

It took Victor Niederhoffer many years of study and a lot of hard work to become widely known as an expert in financial markets. After graduating from Harvard and receiving his PhD in finance from the University of Chicago, he continued his ascent within academia, teaching at Berkeley for five years. As an academic, he authored numerous research papers on market anomalies and how one might profit from following clever trading strategies.

As Niederhoffer learned more and became increasingly sophisticated, he sensed an opportunity to use his academic knowledge to make money. Retiring from academia in 1980, he chose to pursue a career as a practitioner in financial markets. His firm, Niederhoffer Investments, was so successful that he caught the attention of investing guru George Soros. Niederhoffer began working with Soros in the 1980s, advising him on commodities and fixed-income trading. Eventually, Soros allocated $100 million to his firm. During the early 1990s, it was rumored in the financial press that Niederhoffer had been generating returns of 30 percent or more per year.

In 1996, based on an illustrious track record and a distinguished trading career, Niederhoffer published his personal cookbook, The Education of a Speculator, in which he revealed his approach to trading and making money in the markets. Who couldn't learn from this titan of finance? And he was a titan. When his book hit the shelves, Niederhoffer was among the best-known hedge fund managers in the United States, was at the pinnacle of his profession, and had become known as one of the foremost experts on investing worldwide. Niederhoffer was not only an expert, he was an expert's expert.

And so, in 1997, as a widely respected expert in financial markets, Niederhoffer may have been surprised when he experienced steep losses on a Thai currency bet. But Niederhoffer had experienced volatility before; he just needed to apply his prodigious investing skill and pull yet another rabbit out of a hat. While Niederhoffer had fallen behind during early 1997, his real problems began when he chose a risky strategy to recover from those losses: He began selling out-of-the-money puts on the S&P 500.2

Selling out-of-the-money puts has been likened to picking up nickels in front of a steamroller. You collect a little bit of money (the nickel) for the contract, but you agree to purchase a stock at a set price in the future (the steamroller). Everything works so long as the steamroller doesn't accelerate. However, should our steamroller operator drop his sandwich and inadvertently step on the gas (the stock price falls), you could find yourself in a pressing situation…

This pressing situation can become downright perilous when market prices approach or fall below the put strike price. If you promise to buy a stock for $10 and its price on the open market is $5, you can be sure that your creditors will come to collect. And if you can't honor your promise to fulfill the contract, well, that's when you need to worry about the steamroller.

In late October, Niederhoffer's out-of-the-money November puts were trading at $0.60, but the Asian financial crisis continued to unfold and began to rattle US markets. The value of his puts quadrupled to $2.40, although they were still over 15 percent out of the money. Niederhoffer was confident, stayed the course, and left his position intact (he had come back from worse than this).

The following week, the S&P plunged by 7 percent, and the implied volatility of the puts skyrocketed. The puts were both closer to being “in the money” and carried higher implied volatility (the market believed the chance of them ending in the money was greater). Each of these effects made them more expensive. With this put-valuation double-whammy, the value of Niederhoffer's puts exploded, which was very bad, since Niederhoffer had sold them. In just over a week's time, Niederhoffer's short position had moved against him by a factor of 25 times or more. This extreme move proved to be too much, even for the master. Shortly thereafter, Niederhoffer faced a margin call that he could not meet; his fund's account had gone bankrupt.3 Cue the steamroller.
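
To see how this valuation double-whammy can turn a cheap put into a disaster for its seller, consider a minimal Black-Scholes sketch in Python. The strike, dates, interest rate, and volatility figures below are our own illustrative assumptions, not a reconstruction of Niederhoffer's actual positions; the point is only the direction and rough magnitude of the move.

from math import erf, exp, log, sqrt

def norm_cdf(x):
    # Standard normal cumulative distribution function via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_put(spot, strike, t, rate, vol):
    # Black-Scholes value of a European put option.
    d1 = (log(spot / strike) + (rate + 0.5 * vol ** 2) * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    return strike * exp(-rate * t) * norm_cdf(-d2) - spot * norm_cdf(-d1)

# Before the plunge: index at 940, strike 15 percent below the market,
# roughly three weeks to expiration, 35 percent implied volatility.
before = bs_put(spot=940.0, strike=800.0, t=20 / 365, rate=0.05, vol=0.35)

# After the plunge: the index falls 7 percent, implied volatility doubles,
# and a week of time value has burned off.
after = bs_put(spot=940.0 * 0.93, strike=800.0, t=13 / 365, rate=0.05, vol=0.70)

print(f"put before: ${before:.2f}")                        # about $0.60
print(f"put after:  ${after:.2f}")                         # about $16
print(f"move against the seller: {after / before:.0f}x")   # roughly 25-30x

A 7 percent drop in the index multiplies the put's value by more than 25 under these assumptions, because the seller loses on both the price move and the volatility spike at the same time.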

How can it be that Victor Niederhoffer – a noted academic, a respected financial expert, a lion on Wall Street, and a financial press darling – could bankrupt his fund by pursuing a volatile options strategy that first-year business school students are cautioned against as being too risky? And what did this say about Niederhoffer's expertise?

Some might argue that once Niederhoffer took losses on his Thai currency bet, his incentives changed and affected his perspective. Facing such losses, perhaps this risky option strategy seemed like a reasonable response. Perhaps it was at this point that Niederhoffer became a slave to his emotions, and therefore ceased to be an expert. Perhaps he simply believed in his innate abilities. Perhaps he just wanted to take on more risk. We will never know.

Yet we rely on experts like Niederhoffer because they are supposed to have superior knowledge! They, given their expert credentials, should reach the right conclusion more often than we, the nonexperts. Once Niederhoffer went bust, surely the masses revoked his expert credentials and relegated him to nonexpert status, right?

Mustafa Zaida, a professional investor who ran a European hedge fund, apparently didn't think so. In 2002, Zaida seeded a new offshore fund called the Matador Fund, with Niederhoffer directing the trading activities. Zaida reportedly commented, “He's definitely learned his lesson.” It's hard to know exactly what Zaida's thinking was here, but he clearly believed Niederhoffer still maintained at least some degree of expertise.

The Matador Fund performed well initially, compounding at high rates for several years and growing to $350 million. Then in 2007, during the credit crisis, Matador reportedly lost more than 75 percent of its value. As had happened in 1997, Niederhoffer's account was liquidated. He had “blown up” for the second time in about a decade.4 And while these episodes were highly public, there are less-public rumors that Niederhoffer blew up a third time, although we don't know whether to give them much credence.

Regardless, for fairly extended periods of time, Niederhoffer definitely appeared to be an expert; he generated high returns, seemingly without excessive downside risk. But did he eliminate the possibility of extreme downside outcomes? No. This was emphatically not the case, as he empirically demonstrated his ability to be steamrolled, not once, but twice.

Some might argue that if Niederhoffer told investors, “You may lose all your money pursuing this strategy, but it will give you high returns,” then they were not really relying on his expertise to protect them from bankruptcy. But perhaps this is beside the point. If you are aware of a strategy that compounds at 30 percent, but you know that every few years there will be a year when you lose all of your money, then that is not a strategy worth pursuing. Any expert who recommends such a strategy should not be considered an expert in financial matters.
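
A back-of-the-envelope sketch makes the point. Assume a hypothetical 10-year cycle in which the strategy earns 30 percent for nine straight years and then loses everything in year ten:

# A hypothetical 10-year cycle: nine years of +30 percent returns,
# then a single year in which the strategy loses everything.
wealth = 100.0
for annual_return in [0.30] * 9 + [-1.00]:
    wealth *= 1.0 + annual_return

arithmetic_mean = (9 * 0.30 - 1.00) / 10
print(f"average annual return: {arithmetic_mean:.0%}")   # 17% -- looks brilliant
print(f"terminal wealth on $100: ${wealth:.2f}")         # $0.00 -- total ruin

The average annual return looks heroic, but the compound return is minus 100 percent; no amount of upside repairs a multiplication by zero.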

Of course, there is an alternative explanation here. Maybe Niederhoffer wasn't an expert at all. Maybe Niederhoffer just chose risky strategies that made him look like a genius while they were working, but when he blew up, he demonstrated that he wasn't doing anything special at all. The emperor was revealed to have no clothes. All the fancy academic pedigrees, the studies and papers, the published book, the high returns – in short, all the things that made Niederhoffer an “expert,” were perhaps really just an illusion. Perhaps there really was no “expertise” involved, whatsoever. Certainly, after several bankruptcies, that conclusion seems reasonable.

Of course, this story is not meant to pick on Niederhoffer. Like all experts, Niederhoffer is only human. But as we will highlight over the next few chapters, humans are systematically flawed. And so if humans are systematically flawed, why do we still rely on experts for all of our most important decisions?

Why Do We Rely on Experts?

“If you do fundamental trading, one morning you feel like a genius, the next day you feel like an idiot…by 1998 I decided we would go 100% models…we slavishly follow the model. You do whatever it [the model] says no matter how smart or dumb you think it is. And that turned out to be a wonderful business.”

– Jim Simons, Founder, Renaissance Technologies5

Let's start off by examining our coauthor, Wes Gray, a person many would consider an “expert.” In fact, in many respects, Wes is eerily similar to Vic Niederhoffer. Wes graduated from an uber-prestigious undergraduate business program at the Wharton School of the University of Pennsylvania and earned an MBA and a PhD in finance from the University of Chicago – sound familiar? Well, it should: This is essentially the same academic training as Vic Niederhoffer.

Upon completion of his PhD, Wes entered academia and spent four years as a full-time tenure-track professor. Wes resigned his post as a full-time academic because he raised almost $200 million in assets from a multibillion-dollar family office and a handful of other ultra-high-net-worth families. This is all uncannily similar to how Niederhoffer started his career. Vic also did his time as a professor, and then left academia after a billionaire (i.e., Soros) gave him a large slug of capital. Let's hope the similarity in the stories between Vic and Wes ends at this stage. The last thing Wes wants to do is blow up multiple asset management firms and lose investor capital. He is also deathly afraid of steamrollers.

Clearly, some people believe Wes is an “expert” and are willing to let him manage a large amount of capital without a multi-decade track record. But why might investors' future experiences differ between Vic and Wes? On paper, the two Chicago finance PhDs are virtually the same. It has been said that the definition of insanity is doing the same thing over and over again and expecting a different outcome. So should we avoid an expert like Wes because he is essentially a carbon copy of Vic?

We think the key difference between Wes and Vic is not related to their financial expertise. The difference is related to their skepticism with regard to their own expertise. On most discretionary, day-to-day aspects of investing, for example, picking individual stocks or calling the direction of interest rates, Wes believes firmly that he is completely wrong almost all of the time, whereas Vic believed he could master the markets. And while an expert with no faith in his or her ability sounds counterintuitive, it is actually invaluable, because this approach to being an expert minimizes the chance of overconfidence. In fact, Wes has established internal firm structures to ensure that he is reminded on a frequent basis that he is a terrible expert in this sense. But why would an expert systematically convince himself that he is not an expert? The reason Wes engages in this peculiar behavior is explained in a quote often attributed to Mark Twain: “It ain't what we don't know that causes us problems; it's what we know for sure that simply ain't so.”

An expert, or any market participant, must acknowledge his own fallibility and must constantly remind himself why he is flawed. This is very difficult to do consistently, since our natural inclination is to believe we are better than average. Unfortunately, on average, we are only going to be average. The ability to question one's own convictions, even when they are firmly held, turns out to be a very useful thing in investing.

The next example highlights how our minds can tell us something with 100 percent confidence when, in fact, what our mind is telling us is 100 percent incorrect.

Figure 1.1 highlights this point.6 Stare at box A and box B in the figure. If you are a human being, you will perceive box A as darker than box B.


Figure 1.1 Ed Adelson Checkerboard Illusion


Then ask yourself:

“How much would I bet that A is darker than B?” Would you bet $5? $20? $100?

Or perhaps you would borrow money from the bank, leverage up 10 times, and bet $1,000,000. Why not, right? It's a sure thing.

We know how a human approaches this question, but how does a computer think about it? A computer identifies the red-green-blue (RGB) values for a pixel in box A and the RGB values for a pixel in box B. Next, the computer tabulates the results: 120-120-120 for box A; 120-120-120 for box B. Finally, the computer compares the RGB values of the pixel in A and the pixel in B, identifies a match, and concludes that box A and box B are exactly the same color. The results are clear to the computer.
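
For the literal-minded, here is that comparison as a few lines of Python, using the RGB values quoted above:

# The computer's entire "analysis" of the illusion, straight from the text above:
pixel_a = (120, 120, 120)   # RGB values sampled from box A
pixel_b = (120, 120, 120)   # RGB values sampled from box B
print(pixel_a == pixel_b)   # True: the two boxes are exactly the same color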

So which is it? After taking into consideration the results from the computer algorithm, would you still consider A darker than B? We don't know about you, but we still think A looks darker than B – call us crazy. But then that's what makes us human – we aren't perfect.

The sad reality is that the computer is correct, and our perception is wrong. Our mind is being fooled by an illusion created by a vision scientist at MIT, Professor Ed Adelson. Dr. Adelson exploits the local contrast between neighboring checker squares and the mind's perception of the pillar as casting a shadow. The combination creates a powerful illusion that tricks every human mind. The human mind is, as succinctly stated by Duke psychology professor Dan Ariely, “predictably irrational.”

That may seem to be a strong statement. Perhaps the answer revealed in Figure 1.2 has convinced you that our minds may not be perfect in certain isolated settings (yes, the parallel bars are the same color from top to bottom). Or perhaps it has only persuaded you to believe that while a subset of the population may be flawed, you still possess a perfectly rational and logical mind. Don't be too sure, as a well-established body of academic literature in psychology demonstrates conclusively that humans are prone to poor decision-making across a broad range of situations.


Figure 1.2 Ed Adelson Checkerboard Illusion Answer


But what about experts? Surely experts are beyond the grip of such cognitive biases? We often assume that professionals with years of experience and expertise in a particular field are better equipped and incentivized to make unbiased decisions. Unfortunately for experts, and for those who rely on them, the academic evidence is unequivocal: systematic decision-making, which relies on models, outperforms discretionary decision-making, which relies on experts. We will come back to this point in a moment, but first let's discuss some other reasons experts might not always provide flawless advice.

What Are the Experts' Incentives?

When paying a financial expert to manage your money, a good question to ask is the following: What are the experts' incentives? This is important to know, because even if the expert has true knowledge about financial markets, misaligned incentives can destroy an edge the expert has, or make the expert look better than he really is. Here are a few examples of when experts' incentives might not be properly aligned:

Focusing on short-term vs. long-term results. Consider a financial expert creating a value strategy with an assumed “edge,” or ability to beat the market in the long run. This expert can decide to invest in 200 of her best stock ideas or 50 of her best stock ideas. The expert faces a trade-off between these two approaches. On one hand, the expert knows that, over the long haul, buying the cheapest 50 value stocks will be a better risk-adjusted bet than the 200-stock portfolio, since the larger portfolio would dilute performance in the long run. On the other hand, the expert also understands that the 50-stock portfolio has a higher chance of losing to a standard benchmark in a given year, which could cause her to lose clients in that year. The expert, who assumes, correctly, that most investors focus on short-term results, will opt for the 200-stock portfolio in order to minimize downside risk (and retain clients), and thus will create a suboptimal product that doesn't fully leverage her expertise. In effect, the expert is indeed an expert, but there is an incentive alignment problem between the expert and investors that negates the benefits of her expertise.
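
A toy Monte Carlo makes this career-risk trade-off concrete. The alpha and volatility numbers below are invented for illustration; the structural point is that stock-specific noise diversifies away as holdings grow, so the concentrated portfolio can have the better long-run edge and still trail the benchmark in far more individual years.

import numpy as np

rng = np.random.default_rng(42)
trials = 100_000    # simulated one-year outcomes
idio_vol = 0.30     # assumed stock-specific volatility (an illustration, not data)

# Assume the 50 cheapest stocks carry a larger edge than the broader 200.
for n_stocks, alpha in ((50, 0.03), (200, 0.02)):
    # In an equal-weight portfolio, stock-specific noise shrinks with the
    # square root of the number of holdings, so concentrated portfolios
    # track the benchmark much more loosely.
    noise = rng.normal(0.0, idio_vol / np.sqrt(n_stocks), trials)
    excess_return = alpha + noise
    lag_freq = np.mean(excess_return < 0)
    print(f"{n_stocks} stocks: edge {alpha:.0%}/yr, "
          f"lags the benchmark in {lag_freq:.0%} of years")

Under these assumptions, the 50-stock portfolio lags the benchmark in roughly a quarter of all years despite its larger edge, while the diluted 200-stock portfolio lags in far fewer, which is exactly why the client-retention incentive pushes the expert toward the weaker product.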

Exploiting authority to generate business. Let's say we have two financial experts. One expert shows up in a pair of jeans and a sweatshirt and states that simply investing in the S&P 500 from 1927 to 2013 earned a return of 9.91 percent per year, on average. The second expert shows up in an Armani suit, with his research team of PhDs (also in suits) behind him, and tells you that with his investment technique, $100 would have grown to $371,452 from 1927 to 2013. “Wow,” you would say, and then ask, “So what are the details of the strategy?” Our straight-talking sweatshirt-and-jeans expert might say, “Well, you simply buy and hold the S&P 500 Index and reinvest dividends to achieve the 9.91 percent return.” However, our Armani-suited PhD squad may respond with the following: “Our strategy is proprietary, is built off of 30 years of research by 15 PhDs, and seeks to dynamically allocate to certain sectors of the market, with more weight going toward better-performing securities.” Sounds impressive, but the strategy is the same: Buy and hold the market! Sadly, that is the expert's power over the layman. If you are unable to fully interpret the advice of an expert, you may be beguiled by his overblown rhetoric masquerading as skill. Overall, an investor needs to be aware of experts' incentives to leverage their position of authority. If an expert cannot explain his strategy to you in a simple, understandable way, we recommend walking in the other direction.
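
A quick sanity check shows the two pitches are arithmetically identical. Compounding the jeans-and-sweatshirt expert's rounded 9.91 percent average over the 87 years from 1927 to 2013 reproduces the Armani squad's headline number (small differences are rounding):

# Both pitches describe the same buy-and-hold strategy.
years = 2013 - 1927 + 1    # 87 years of annual returns
avg_return = 0.0991        # the plain-spoken expert's 9.91 percent figure
terminal = 100 * (1 + avg_return) ** years
print(f"$100 grows to roughly ${terminal:,.0f}")
# about $372,000 -- the Armani squad's $371,452, to within rounding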

Favoring complexity over simplicity. All else equal, financial experts prefer a more complex model to a simple one. Why? Because complex models allow them to charge higher fees! As we will show later in the book, simple models beat complex models, and they certainly beat human experts. Why would experts, many of whom are informed of this fact, recommend a complex solution other than for an increased fee? Consider two asset-allocation alternatives: The first option is an “optimized, time-varying, strategic allocation approach, based on years of research,” whereas the second option is a 50/50 split between stocks and bonds, bought and held forever. Also consider that both approaches charge a 1 percent management fee and you have to choose one of the options. Your instinct probably suggests the more advanced version. But why? What if the simpler option is actually superior to the more complex one?

Overall, there are some true experts in the field. We recommend focusing on those experts who have long-term goals, are transparent about their investment strategy, and have an ability to explain their approach in one sentence.

Are Experts Worthless?

To be clear: We are not making the claim that human experts are worthless across all aspects of the decision-making process. Dentists are great at filling cavities, surgeons are quite handy at repairing ACLs, and the right financial advisor can protect us from making expensive mistakes. Experts are critical, but only for certain elements of the decision-making process. To better frame the decision-making problem, we break the decision-making process into three components (see Figure 1.3):

• Research and development (build systems)

• Systematic implementation (implement systems)

• Evidence-based assessment (assess systems)


Figure 1.3 The Decision-Making Process


We would argue that human experts are required for the first and third phases of a decision-making process, which are the research and development phase and the assessment phase, respectively. The crux of our argument is that human experts should not be involved in the second phase of decision-making, or the implementation phase.

During the research and development phase of decision-making, experts build and test new ideas. In this phase, experts are required to create a sensible model. In the second phase – implementation – one should eliminate human involvement and rely on systematic execution. Finally, during the assessment phase of decision-making, one should once again rely on human experts to analyze and assess model performance to make improvements and incorporate lessons learned from the implementation phase.

We look to the real world for insights into how this three-phase decision-making framework might be applied in practice. A great case study exists within the US Marine Corps (USMC), where Wes spent nearly four years as an officer deployed in a variety of combat situations. The USMC relies on “standard operating procedures,” or SOPs, particularly when Marines are in harm's way. SOPs are developed according to the three-step process mentioned above, which is designed to establish the most robust, effective, and systematic decision-making process possible.

One example is the SOP for setting up a defensive position in a combat situation.7 In the first phase of SOP development, experienced combat veterans and expert consultants review past data and lessons from the field to develop a set of rules that Marines will follow when establishing a defensive position. These rules are debated and agreed upon in an environment that emphasizes slow, deliberate, and critical thought. The current rules, or SOP, for a defensive position are summarized by the acronym SAFE:

• Security

• Automatic weapons on avenues of approach

• Fields of fire

• Entrenchment

During the second phase – the implementation phase – of SAFE, Marines in combat are directed to “follow the model,” or adhere to the SOP. The last thing a Marine should do is disregard SOPs in the middle of a firefight, when the environment is chaotic, Marines are tired, and human decisions are most prone to error. Marines are trained from the beginning to avoid “comfort-based” decisions and to follow standard operating procedures. Of course, once the battle is over, Marines in the field will conduct a debrief and send this information back to the experts, who can debate and assess in a calm environment whether the current SOP needs to be changed based on empirical experience gleaned from the field – the third phase. A key principle of this three-step decision-making process is that discretionary experts are required to develop and assess, but execution is made systematic, so as to minimize human error. The Marines, like other critical decision makers, want experts to develop and assess SOPs in a stable training environment. However, the Marines want to implement SOPs systematically when the environment shifts from the training environment to the live battlefield.

The Expert's Hypothesis

The so-called expert's hypothesis, which asserts that experts can outperform models, is intuitive and tells a deceptively compelling story. For example, to most, it seems like common sense that a hedge fund manager with a Harvard MBA and 20 years of work experience at Goldman Sachs can beat a simple model that buys a basket of low P/E stocks. The logic behind this presumption is persuasive, as the expert would seem to possess a number of advantages over the model. The expert can arguably outperform the simple model for the following reasons:

• Experts have access to qualitative information.

• Experts have more data.

• Experts have intuition and experience.

Of course, there are other ways to support the argument that a human expert will beat a simple model, but most of these stories revolve around the same key points already outlined.

The following three arguments, however, underlie the expert's hypothesis, and while they are plausible, they are wrong:

1. Qualitative information increases forecast accuracy.

2. More information increases forecast accuracy.

3. Experience and intuition enhance forecast accuracy.

Remarkably, the evidence we will present illustrates that qualitative information, more information, and experience/intuition do not lead to more accurate or reliable forecasts, but instead lead to poorer decision-making. Because this result is so counterintuitive, it is that much more important to understand.

Among the hundreds of cases of expert forecasts gone awry, one high-profile example is Meredith Whitney.8 Ms. Whitney is famous for her prescient forecast of the banking crisis that reared its ugly head in late 2008. Public accounts of Ms. Whitney's predictions, which were widely observed and discussed during that time period, all suggested that Ms. Whitney was a “genius” after her remarkable call on Citibank's balance sheet blues.

But Ms. Whitney didn't stop there. She outlined her gloomy forecast for the municipal bond market on a December 2010 segment of the prime-time CBS news program 60 Minutes. Ms. Whitney predicted there would be “50 to 100 sizable defaults.” She forcefully reiterated her prediction at the Spring 2012 Grant's Interest Rate Observer Conference, where we observed firsthand the emotional conviction Ms. Whitney felt for her bold call.

However, Ms. Whitney's powers of prediction were fleeting. In September 2012, the Wall Street Journal published a stinging article titled “Meredith Whitney Blew a Call – And Then Some.” The piece was quick to point out that “there were just five defaults” in the municipal market.9



Notes

1. Mick Winstein, “Victor Niederhoffer after He Lost Everything in the 1997 Asian Crisis,” Smarter Investing (June 3, 2013), http://investing.covestor.com/2013/06/victor-niederhoffer-after-he-lost-everything-in-the-1997-asian-crisis-video.

2. John Cassidy, “The Blow-Up Artist,” New Yorker Magazine (October 15, 2007).

3. Bill Ziemba, “Hedge Fund Risk, Disasters and Their Prevention,” Wilmott magazine (June 2, 2006), http://www.wilmott.com/pdfs/060206_drz.pdf.

4. R. Ziemba and W. Ziemba, Scenarios for Risk Management and Global Investment Strategies (New York: John Wiley & Sons, 2007).

5. “Mathematics Common Sense and Good Luck: My Life and Careers,” MIT Video (December 9, 2010), http://video.mit.edu/watch/mathematics-common-sense-and-good-luck-my-life-and-careers-9644/.

6. Edward Adelson, “Checkershadow Illusion,” accessed February 10, 2014, http://persci.mit.edu/gallery/checkershadow.

7. Marine Rifle Squad, MCRP 3-11.2, Chapter 5.

8. We do not mean to single out Meredith Whitney. The same point could be made with just about any analyst who has shown up on CNBC and expressed a confident and detailed opinion on a forecast.

9. David Weidner, “Meredith Whitney Blew a Call – And Then Some,” Wall Street Journal (September 27, 2012), http://online.wsj.com/news/articles/SB10000872396390444549204578021380172883800.
