
2 HONEYBEES: Making Smart Decisions


Appledore Island is a tough place for honeybees.

Anchored in the Atlantic off the coast of southern Maine, the rocky, wind-blown island is barely a half-mile long, with hardly any trees, which the bees need for nest building. In fact, you might describe the island as a kind of bee Alcatraz, which makes it an ideal place to observe their behavior under controlled conditions.

A few summers ago, biologists Thomas Seeley of Cornell University and Kirk Visscher of the University of California at Riverside ferried a half-dozen colonies of honeybees to Appledore, which is home to the Shoals Marine Laboratory run by Cornell. For nearly a decade, Seeley and Visscher have been studying a fascinating example of what they call “animal democracy.” How do several thousand honeybees, they want to know, put aside their differences to reach a decision as a group?

The focus of their research has been honeybee “house hunting.” In late spring or early summer, as a large hive outgrows its nest, the group normally divides. The queen and roughly half of the bees fly off in a swarm to create a new colony, leaving behind a daughter queen in the old nest. There may be fifteen thousand bees in the swarm, which typically clusters on a tree branch, while several hundred scout bees search the neighborhood for new real estate. Although the queen’s presence is important to bees in the swarm, she plays no role in picking a new nest site. That task is delegated to the scouts, who do their jobs without direction from a leader.

When a scout buzzes off into the countryside, she’s looking for just the right dwelling place (I say she, because worker honeybees are all females). It must be well off the ground, with a small entrance hole facing south and enough room inside to allow the colony to grow. If she finds such a spot—a hollow in a tree would be perfect—she returns to the swarm and reports her discovery by doing a waggle dance. This dance, which resembles the one forager bees do when they locate a new patch of flowers, contains a code telling others how to find the site. Some of the scouts that see her dance will then go examine the site for themselves, and, if they agree with her assessment, they’ll return to the swarm and dance in support of the site, too.

This is no trivial decision for the bees. As long as the swarm is clinging to the branch, it remains exposed to weather, predators, and other hazards. But once the swarm selects a new home, it won’t move again until next spring. So it has to get it right the first time. If the group selects poorly, the entire colony could perish.

One by one, scouts that have been exploring the neighborhood return to the cluster with news about different locations. Soon there’s a steady stream of bees flying between the cluster and a dozen or more potential nest sites, as more and more scouts get involved in the selection process. Eventually, after enough scouts have inspected enough sites, it becomes clear that traffic at one site is much greater than that at any other, and a decision is reached. The bees in the main cluster warm up their wings and fly off together to the chosen site—which almost always turns out to be the best one.

Facing a life-or-death situation, in other words, a honeybee swarm engages in a complex decision-making process involving multiple, simultaneous interactions between hundreds of individuals with no leadership at all—exactly the kind of chaotic, unpredictable enterprise that, if attempted by people under stress, would almost certainly lead to disaster. Yet the bees almost always make the right choice.

How do they do it?

The Five-Box Test

One spring day in 1949, a young zoologist named Martin Lindauer was observing a swarm of bees near the Zoological Institute in Munich, Germany, when he noticed something odd. Some of the bees, he realized, were doing waggle dances. Ordinarily that meant they were foragers that had found a nice patch of flowers nearby, and they were telling other bees where to go find it. But these dancers weren’t carrying any pollen or nectar, so Lindauer didn’t think they were foragers. What were they up to?

Lindauer’s mentor at the University of Munich, the renowned zoologist Karl von Frisch, had recently figured out that the waggle dance—or “tail-wagging dance” as he called it—was in fact a sophisticated form of communication (he won a Nobel Prize for this research in 1973). When a foraging bee danced, von Frisch had discovered, she wasn’t just advertising a source of food, she was also providing precise directions to locate it. To perform such a dance, a bee would run forward a short distance on the hive’s comb while vibrating her abdomen in a “waggle.” Then she’d return to her starting point in a figure-eight and repeat this over and over, as if reenacting her flight to the flower patch. The length of her dance indicated how far away the food was, and the angle of her dance (relative to vertical) corresponded with the direction of the food (relative to the sun). If a bee danced in a direction thirty degrees to the right of vertical, for example—picture the number 1 on a clock—the flower patch could be found by flying in a direction thirty degrees to the right of the sun. It was an ingenious system, but it had never been linked to house hunting before.
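Von Frisch’s code is concrete enough to write down. Here is a minimal sketch in Python: the angle-to-bearing rule is exactly the one described above, while the conversion of waggle time to distance uses an assumed round calibration of one kilometer per second of waggling, since the true rate varies by species and study.

```python
# Decoding a waggle dance performed on the vertical comb. The bearing
# rule is von Frisch's; the distance calibration below is an assumed
# round number for illustration (real rates vary by species and study).

KM_PER_WAGGLE_SECOND = 1.0  # assumption, not a measured constant

def decode_waggle_dance(dance_angle_deg: float, waggle_seconds: float) -> dict:
    """dance_angle_deg: angle of the waggle run relative to straight up
    (positive = clockwise, so +30 looks like the 1 on a clock face).
    waggle_seconds: duration of the waggle phase of the dance."""
    return {
        # fly at the same angle to the sun as the dance made to vertical
        "bearing_from_sun_deg": dance_angle_deg,
        "approx_distance_km": waggle_seconds * KM_PER_WAGGLE_SECOND,
    }

# A dance 30 degrees right of vertical with 2 seconds of waggling means:
# fly 30 degrees right of the sun for roughly two kilometers.
print(decode_waggle_dance(30.0, 2.0))
```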

By carefully studying several swarms—sometimes running beneath them as they flew across the Bavarian countryside—Lindauer determined that the bees dancing on the swarm cluster were scouts that had been out searching for a nest site. Some were still powdered with red-brick dust from having explored a hole in a building, or blackened with soot from having checked out a chimney. Just as foraging bees used the waggle dance to share news about food sources, so the scouts were using it to report on potential real estate. At first, many of the scouts danced in different directions, apparently announcing various options. But after some hours, fewer and fewer sites were mentioned until, finally, all the dancers were pointing in the same direction. Soon after that, the swarm lifted off from its bivouac and flew to its new home, which Lindauer was able to locate by reading the code of the dances.

The bees had reached a consensus, he theorized, because the liveliest scouts had persuaded the rest to go along with their choice. They did this by getting rivals to visit their preferred site, where, confronted with the superior qualities of the site, the former competitors simply changed their minds. One by one they were won over, he speculated, and the disagreement went away.

In this respect, at least, Lindauer got it wrong. It wasn’t quite that simple. Researchers have since established that only a small percentage of scouts ever visit more than one site. The group’s decision does not rely on individual scouts changing their minds, but rather on a process that combines the judgments of hundreds of scouts—one that would remain a mystery for fifty years.

That’s where Tom Seeley and Kirk Visscher came in. Beginning in the late 1990s, they picked up where Lindauer left off, this time using video cameras to record every aspect of the swarm’s behavior. They also brought some new ideas about honeybee deliberation. Given the large number of individuals that take part in house hunting, they doubted that bees’ decision making was based on consensus. It just seemed too complicated, like trying to get a large group of friends to agree on which movie to watch. More likely, they figured, the process relied on some form of competition. Instead of trying to work through their differences with one another, scouts dancing on the swarm cluster appeared to be actively lobbying for different sites. It wasn’t a meeting of minds at all, but a race to build up supporters—with the winners taking all.

In that sense, the bees’ system was more like a stock market, in which the value of a security rises or falls according to the collective judgment of the group. Scouts watching another scout dance, like brokers, might be persuaded to do their own research on the site being advertised, and if they liked what they saw, they could buy into the site by dancing for it themselves. If they didn’t like it, they didn’t have to. The more bees that joined in, the greater the likelihood the site would be selected.

But how did the process work, exactly? What were the mechanisms that enabled the bees to choose so accurately?

To find out, Seeley and Visscher conducted a series of experiments. After preparing a swarm for house hunting, they placed five plywood nest boxes an equal distance from the bees on Appledore Island—four representing mediocre choices for a new nest and one that was excellent. What made the fifth box better than the rest was that it offered the bees an ideal amount of living space—about forty quarts, compared to fifteen quarts for the others, which was not enough to store honey, raise brood, and meet the other needs of an expanding colony. To track the bees during the decision-making process, Seeley and Visscher labeled all four thousand individuals in each swarm with tiny numbered disks on their thoraxes and dabs of paint on their abdomens, a tedious process that involved chilling batches of twenty bees at a time to render them docile enough to be handled. But it was worth it in the end, because, when they looked at video tapes of the swarms later, they could tell which bees had visited which nest boxes and which ones had danced for which boxes at the main cluster. The shape of the decision-making process emerged.

The key, it turned out, was the brilliant way the bees exploited their diversity of knowledge—the second major principle of a smart swarm. Just as Deborah Gordon’s ant colonies used self-organization to adjust to changes in the environment, so the honeybees used diversity of knowledge to make good decisions. By diversity of knowledge, in this case, I mean a broad sampling of the swarm’s options. The more choices, the better. By sending out hundreds of scouts at the same time, each swarm collected a wealth of information about the neighborhood and the nest boxes, and it did so in a distributed and decentralized way. None of the bees tried to visit all five of the boxes to rate which one was the best. Nor did they submit their findings to some executive committee for a final decision, as workers in a corporation might do. Instead, these hundreds of scouts each provided unique information about the various sites to the group as a whole in what Seeley and Visscher described as a “friendly competition of ideas.”

Equally important, every scout evaluated nest sites for herself. If a scout was impressed by another scout’s dance, she might fly to the box being advertised and conduct her own inspection, which could last as long as an hour. But she would never blindly follow another scout’s opinion by dancing for a site she hadn’t visited. That would open the door to untested information being spread like a rumor. Or, to use the stock-broker analogy, a bee wouldn’t invest in a company just because its stock was on the rise. She’d check out its fundamentals first.

Meanwhile, as the scout bees continued their search, the swarm was busily ranking each option. This was determined by the number of bees visiting each site. The more visitors, the more “votes” for the site. Though the best nest box wasn’t discovered first by the scouts, it quickly attracted the attention of numerous bees. Scouts returning from the excellent box had no trouble convincing others to check it out, largely because they danced for it so vigorously—performing as many as a hundred dance circuits each, compared to only a dozen or so danced by bees for lesser sites. A dance of that length could take five minutes, compared with thirty seconds for a shorter dance, so it was much more likely to be noticed by scouts walking around on the surface of the cluster. And once the number of bees advertising the best box increased, support for it shot up, as interest in the mediocre sites faded away.

“This careful tuning of dance strength by the scouts created a powerful positive feedback,” Seeley explained, “which caused support for the best site to snowball exponentially.” This was a crucial mechanism, because it meant that even small differences in the quality of nests were exaggerated—their “signals” were amplified—making it much more likely that support for the best site would surge ahead.

As more and more bees gathered at the first-rate box, fewer and fewer lingered at the others. That was because scouts returning from boxes for the second or third time were dancing fewer circuits for them each time, whether they’d visited the excellent box or the mediocre ones. Scouts that had visited poor sites quit dancing first. Seeley and Visscher described this mechanism as the dance “decay rate.” It meant that support for less attractive boxes would dwindle automatically—even as the number of bees collecting at the superior box kept growing—in a decision-making process that lasted from two to five hours during the test. In technical terms, this represented a balancing, or negative feedback, preventing the swarm from choosing too fast and making a mistake. These were the factors steering the bees’ problem-solving machine—exponential recruitment on the accelerator, dance decay rate on the brakes.

Meanwhile, something critical was happening at the nest boxes. As soon as the number of bees visible near the entrance to the best box reached fifteen or so, Seeley and Visscher noticed a new behavior among the scouts. Those returning from the box started plowing through bees in the main cluster, producing a special signal called “worker piping.”

“It sounds like nnneeeep, nnneeeep! Like a race car revving up its engine,” Seeley says. “It’s a signal that a decision has been reached and it’s time for the rest of the swarm to warm up their wing muscles and prepare to fly.” Scouts from the excellent box, in other words, were announcing that a quorum had been reached. Enough bees had “voted” for the most attractive box by gathering there at the same time. A new home had been chosen.

The number fifteen, it turns out, was the threshold level for the quorum. Although this number might seem arbitrary at first glance, it is anything but. Like the dance decay rate, the threshold level represents a finely tuned mechanism of emergence. To gather that many bees at the entranceway simultaneously, it takes as many as 150 scouts traveling back and forth between the box and the main swarm cluster, which means that a majority of the bees taking part in the selection process have committed themselves to the site.
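At this point all three moving parts are on the table: quality-weighted dancing on the accelerator, dance decay on the brakes, and a quorum threshold to call the race. A toy simulation can show how they interact. To be clear, this is not Seeley and Visscher’s model; the quorum of fifteen and the rough dance strengths echo the passage, while the decay step, recruitment probabilities, and tick structure are invented for illustration.

```python
import random

# A toy model of the house-hunting race: dance strength scales with
# site quality (the accelerator), each repeat trip trims circuits off a
# scout's dance (the decay rate, the brakes), and the search ends when
# one site hosts a quorum of simultaneous visitors. Every parameter is
# an illustrative guess, not a measured value.

QUALITY = {"excellent box": 100, "mediocre box A": 40,
           "mediocre box B": 35, "mediocre box C": 30}
DECAY = 15          # circuits lost on each additional trip (assumed)
QUORUM = 15         # simultaneous visitors that settle the decision
SCOUTS = 300        # scouts available to join the search
DISCOVER_P = 0.02   # chance per tick an idle scout finds a site herself
WATCH_P = 0.10      # chance per tick an idle scout follows a dance

def simulate(seed: int = 0) -> str:
    rng = random.Random(seed)
    committed = []      # one [site, trips_made] entry per active scout
    idle = SCOUTS
    for tick in range(1, 1000):
        # tally current dance strength and visitors for each site
        strength = {site: 0 for site in QUALITY}
        visitors = {site: 0 for site in QUALITY}
        for site, trips in committed:
            strength[site] += QUALITY[site] - DECAY * trips
            visitors[site] += 1
        for site, count in visitors.items():
            if count >= QUORUM:
                return f"quorum of {count} at '{site}' on tick {tick}"
        # idle scouts are recruited in proportion to dance strength,
        # or stumble onto a random site on their own
        total = sum(strength.values())
        recruited = 0
        for _ in range(idle):
            if total and rng.random() < WATCH_P:
                pick = rng.uniform(0, total)
                for site, s in strength.items():
                    pick -= s
                    if pick <= 0:
                        committed.append([site, 0])
                        recruited += 1
                        break
            elif rng.random() < DISCOVER_P:
                committed.append([rng.choice(list(QUALITY)), 0])
                recruited += 1
        idle -= recruited
        # every committed scout makes another trip; once her dance decays
        # to nothing she drops out (a simplification of real behavior)
        for scout in committed:
            scout[1] += 1
        committed = [s for s in committed if QUALITY[s[0]] - DECAY * s[1] > 0]
    return "no quorum reached"

print(simulate())  # the excellent box wins almost every run
```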

Once the quorum was reached, the final step was for scouts to lead the rest of the group to the chosen site. Most of the swarm, some 95 to 97 percent, had been resting during the whole decision-making process, conserving their energy for the work ahead. Now, as the scouts scrambled through the crowd, they stopped from time to time to press their thoraxes against other bees to vibrate their wing muscles, as if to say, warm up, warm up, get ready to fly. A final signal, called the buzz run, in which the scouts bulldoze through sleepy workers and buzz their wings dramatically, triggered the takeoff. At that point, the whole swarm flew away to its new home—which, to nobody’s surprise, turned out to be the best nest box.

The swarm chose successfully, in short, because it made the most of its diversity of knowledge. By tapping into the unique information collected by hundreds of scouts, it maximized its chances of finding the best solution. By setting the threshold level high enough to produce a good decision, it minimized its chances of making a big mistake. And it did both in a timely manner under great pressure to be accurate.

The swarm worked so efficiently, in fact, you might be tempted to imagine it as a complicated Swiss watch, with hundreds of tiny parts, each one smoothly performing its function. Yet the reality is much more interesting. To watch a swarm in the midst of deliberation is to witness a chaotic scene not unlike the floor of a commodities market, with dozens of brokers shouting out orders at the same time. Bees coming and going. Scouts dancing this way or that. Uncommitted bees milling around. The way they make decisions looks very messy, which is also very beelike. Natural selection has fashioned a system that is not only tailor-made for their extraordinary talents for cooperation and communication but also forgiving of their tendency to be unpredictable. It is from this controlled messiness that the wisdom of the hive emerges.

Seek a diversity of knowledge. Encourage a friendly competition of ideas. Use an effective mechanism to narrow your choices. These are the lessons of the swarm’s success. They also happen to be the same rules that enable certain groups of people to make smart decisions together—from antiterrorism teams to engineers in aircraft factories—through a surprising phenomenon that has come to be known as the “wisdom of crowds.”

The Wisdom of Crowds

In early 2005, Jeff Severts, a vice president at Best Buy, decided to try something different. Severts had recently attended a talk by James Surowiecki, whose bestseller The Wisdom of Crowds claims that, under the right circumstances, groups of nonexperts can be remarkably insightful. In some cases, Surowiecki argues, they can be even more intelligent than the most intelligent people in their ranks. Severts wondered if he might be able to tap into such braininess at Best Buy. As an experiment, in late January 2005 he sent e-mails to several hundred employees throughout the company, asking them to predict sales of gift cards in February. He got 192 replies. In early March, he compared the average of these estimates to actual sales for the month. The collective estimate turned out to be 99.5 percent accurate—almost 5 percent better than the figure produced by the team responsible for sales forecasts.

“I was surprised at how eerily accurate the crowd’s estimates were,” Severts says.

In his book about smart crowds, Surowiecki cites similar examples of otherwise ordinary people making extraordinary decisions. Take the quiz show Who Wants to Be a Millionaire? Contestants stumped by a question are given the option of telephoning an expert friend for advice or of polling the studio audience, whose votes are averaged by a computer. “Everything we know about intelligence suggests that the smart individual would offer the most help,” Surowiecki writes. “And in fact the ‘experts’ did okay, offering the right answer—under pressure—almost 65 percent of the time. But they paled in comparison to the audiences. Those random crowds of people with nothing better to do on a weekday afternoon than sit in a TV studio picked the right answer 91 percent of the time.”

Although Surowiecki readily admits that such stories by themselves don’t amount to scientific proof, they do raise a good question: If hundreds of bees can make reliable decisions together, why should it be so surprising that groups of people can too? “Most of us, whether as voters or investors or consumers or managers, believe that valuable knowledge is concentrated in a very few hands (or, rather, in a very few heads). We assume that the key to solving problems or making good decisions is finding that one right person who will have the answer,” Surowiecki writes. But often that’s a big mistake. “We should stop hunting and ask the crowd (which, of course, includes the geniuses as well as everyone else) instead. Chances are, it knows.”

Severts was so impressed by his first few efforts to harness collective wisdom at Best Buy that he and his team began experimenting with something called prediction markets, which represent a more sophisticated way of gathering forecasts about company performance from employees. In a prediction market, an employee uses play money to bid on the outcome of a question, such as “Will our first store in China open on time?” A correct bid pays $100, an incorrect bid pays nothing. If shares backing a “yes” answer are currently trading at $80, for example, that means the group as a whole believes there’s an 80 percent chance the store will open on time. If an employee is more optimistic, believing there’s a 95 percent chance, he might buy at that price, seeing an expected profit of $15 per share. In the case of the new store, which had been scheduled to open in Shanghai in December 2006, the prediction market took a dive, falling from $80 a share to $50 eight weeks before the opening date—even though official company forecasts at the time were still positive. In the end, the store opened a month late.
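The arithmetic behind that bet is easy to check. A minimal sketch in Python, using only the numbers from the story:

```python
# Expected profit per share for a prediction-market trader. A "yes"
# share pays $100 if the event happens and nothing if it doesn't;
# the prices and probabilities below are the ones from the Best Buy story.

def expected_profit(price: float, believed_prob: float,
                    payout: float = 100.0) -> float:
    """What a trader expects to gain per share, given her own
    probability estimate for the outcome."""
    return believed_prob * payout - price

# The market prices "yes" at $80 (an implied 80 percent chance).
# A trader who privately believes 95 percent expects $15 per share:
print(expected_profit(price=80.0, believed_prob=0.95))  # 15.0
```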

“That first drop was an early warning signal,” Severts says. “Some piece of new information came into the market that caused the traders to radically change their expectations.” What that new information might have been about, Severts never found out. But to him it didn’t really matter. The prediction market had proved its ability to overcome the many barriers to effective communication in a large company. If anyone was listening, the alarm bells were ringing loud and clear.

As this story suggests, there may be several good reasons for companies to pay attention to prediction markets, which are good at pulling together information that may be widely scattered throughout a corporation. For one thing, they’re likely to provide unbiased outlooks. Since bids are placed anonymously, markets may reflect the true opinions of employees, rather than what their bosses want them to say. For another thing, they tend to be relatively accurate, since the incentives for bidders to be correct—from T-shirts to cash prizes—encourage them to get it right, using whatever unique resources they might have.

Above and beyond these factors is the powerful way prediction markets leverage the simple mathematics of diversity of knowledge, which, when applied with a little care, can turn a crowd of otherwise unremarkable individuals into a comparative genius. “If you ask a large enough group of diverse, independent people to make a prediction or estimate a probability, and then average those estimates, the errors each of them makes in coming up with an answer will cancel themselves out,” Surowiecki explains. “Each person’s guess, you might say, has two components: information and error. Subtract the error, and you’re left with the information.”
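Surowiecki’s decomposition is simple enough to demonstrate directly. In the sketch below each guess is modeled as the true value plus an independent error term; the specific numbers are invented, but the cancellation is the point.

```python
import random

# "Each person's guess has two components: information and error."
# Model every guess as the true value plus independent noise, and the
# errors average away. All values here are invented for illustration.

random.seed(42)
TRUE_VALUE = 1000.0

def one_guess() -> float:
    # information (the true value) plus a personal error term
    return TRUE_VALUE + random.gauss(0, 200)

crowd = [one_guess() for _ in range(500)]
average = sum(crowd) / len(crowd)

print(f"a single guess: {crowd[0]:8.1f}")  # often far from 1000
print(f"crowd average:  {average:8.1f}")   # lands close to 1000
```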

The house-hunting bees demonstrate this math very clearly. When several scouts return to the swarm from checking out the same perfect tree hollow, for example, they frequently give it different scores—like opinionated judges at an Olympic ice-skating competition. One bee might show great enthusiasm for such a high-quality site, dancing fifty waggle runs for it. Another might dance only thirty runs for it, while a third might dance only ten, even though she, too, approves of the site.

Scouts returning from a less attractive site, meanwhile, like a hole in a stone wall, might be reporting their scores on the swarm cluster at the same time, and they could show just as much variation. Let’s say these three bees dance forty-five runs, twenty-five runs, and five runs, respectively, in support of this medium-quality site. “You might think, gosh, this thing looks like a mess. Why are they doing it this way?” Tom Seeley says. “If you were relying on just one bee reporting on each site, you’d have a real problem, because one of the bees that visited the excellent site danced only ten runs, while one of the bees that visited the medium site did forty-five.” That could easily mislead you.

Fortunately for the bees, their decision-making process, like that of the Olympics, doesn’t rely on the opinion of any single individual. Just as the scores given by the international judging committee are averaged after each skater’s performance, so the bees combine their assessments through competitive recruitment. “At the individual level, it looks very noisy, but if you say, well, what’s the total strength of all the bees from the excellent site, then the problem disappears,” Seeley explained. Add the three scores for the tree hollow—fifty, thirty, and ten—and you get a total of ninety waggle runs. Add the scores for the hole in the wall—forty-five, twenty-five, and five—and you get seventy-five runs. That’s a difference of fifteen runs, or 20 percent, between the two sites, which is more than enough for the swarm to choose wisely.
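Seeley’s point reduces to a few lines of arithmetic, using the very scores from the passage:

```python
# Summing each site's waggle runs recovers the right ranking even
# though individual scouts disagree. The scores are the ones in the text.

tree_hollow = [50, 30, 10]    # excellent site: three scouts' dances
hole_in_wall = [45, 25, 5]    # medium site: three scouts' dances

total_hollow, total_wall = sum(tree_hollow), sum(hole_in_wall)
margin = (total_hollow - total_wall) / total_wall

print(total_hollow, total_wall, f"{margin:.0%}")  # 90 75 20%
```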

“The analogy is really quite powerful,” Surowiecki says. “The bees are predicting which nest site will be best, and humans can do the same thing, even in the face of exceptionally complex decisions.”

The key to such calculations, as we saw earlier, is the diversity of knowledge that individuals bring to the table, whether they’re scout bees, astronauts, or members of a corporate board. The more diversity the better—meaning the more strategies for approaching problems, the better; the more sources of information about the likelihood of something taking place, the better. In fact, Scott Page, an economist at the University of Michigan, has demonstrated that, when it comes to groups solving problems or making predictions, being different is every bit as important as being smart.

“Ability and diversity enter the equation equally,” he states in his book, The Difference: How the Power of Diversity Creates Better Groups, Firms, Schools, and Societies. “This result is not a political statement but a mathematical one, like the Pythagorean Theorem.”

By diversity, Page means the many differences we each have in the way we approach the world—how we interpret situations and the tools we use to solve problems. Some of these differences come from our education and experience. Others come from our personal identity, such as our gender, age, cultural heritage, or race. But primarily he’s interested in our cognitive diversity—differences in the problem-solving tools we carry around in our heads. When a group is struggling with a difficult problem, it helps if each member brings a different mix of tools to the job. That’s why, increasingly, scientists collaborate on interdisciplinary teams, and why companies seek out bright employees who haven’t all graduated from the same schools. “When people see a problem the same way, they’re likely all to get stuck at the same solutions,” Page writes. But when people with diverse problem-solving skills put their heads together, they often outperform groups of the smartest individuals. Diversity, in short, trumps ability.

The benefits of diversity are particularly evident in tasks that involve combining information, such as finding a single correct answer to a question. To show how this works, Page takes us back to the quiz show Who Wants to Be a Millionaire? Imagine, he writes, that a contestant has been stumped by a question about the Monkees, the pop group invented for TV who became so popular they sold more records in 1967 than the Beatles and Elvis combined. The question: Which person from the following list was not a member of the Monkees?

(a) Peter Tork

(b) Davy Jones

(c) Roger Noll

(d) Michael Nesmith

Let’s say the studio audience this afternoon has a hundred people in it, Page proposes, and seven of them are former Monkees fans who know that Roger Noll was not a member of the group (he’s actually an economist at Stanford). When asked to vote, these people choose (c). Another ten people recognize two of the names on the list as belonging to the Monkees, leaving Noll and one other name to choose from. Assuming they choose randomly between the two, that means (c) is likely to get another five votes from this group. Of the remaining audience members, fifteen recognize only one of the names, which means another five votes for (c), using the same logic. The final sixty-eight people have no clue, splitting their votes evenly among the four choices, which means another seventeen votes for (c). Add them up and you get thirty-four votes for Roger Noll. If the other names get about twenty-two votes each, as statistical laws suggest, then Noll wins—even though 93 percent of the audience is basically guessing. If the contestant follows the audience’s advice, he climbs another rung on the ladder to the show’s million-dollar prize.
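Page’s tally can be reproduced mechanically. The sketch below walks through the same tiers of audience knowledge, assuming, as he does, that anyone who can’t identify the answer guesses uniformly among the names she can’t rule out:

```python
# Reconstructing Page's audience vote, tier by tier. Each group that
# doesn't know the answer splits its votes evenly over the names it
# cannot rule out; all counts come straight from the passage.

votes_for_noll = (
    7           # former fans who know Noll wasn't a Monkee
    + 10 / 2    # recognize two Monkees, so guess between 2 names
    + 15 / 3    # recognize one Monkee, so guess among 3 names
    + 68 / 4    # no clue, so guess among all 4 names
)
votes_per_wrong_name = (100 - votes_for_noll) / 3

print(votes_for_noll)        # 34.0 -> (c) wins
print(votes_per_wrong_name)  # 22.0 for each of the other names
```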

The principle at work in this example, as Page explains, was described in the fourth century B.C. by Aristotle, who noted that a group of people can often find the answer to a puzzle if each member knows at least part of the solution. “For each individual among the many has a share of excellence and practical wisdom, and when they meet together, just as they become in a manner one man, who has many feet, and hands, and senses, so too with regard to their character and thought,” Aristotle writes in Politics. The effect might seem magical, Page notes, but “there is no mystery here. Mistakes cancel one another out, and correct answers, like cream, rise to the surface.”

This does not mean, he cautions, that diversity is a magic wand you can wave at any problem and make it go away. It’s important to consider what kind of task you’re facing. “If a loved one requires open-heart surgery, we do not want a collection of butchers, bakers, and candlestick makers carving open the chest cavity. We’d much prefer a trained heart surgeon, and for good reason,” Page writes. Nor would we expect a committee of people who deeply hate each other to come up with productive solutions. There are limits to the magic of the math.

You have to use common sense when weighing the impact of diversity. For simple tasks, it’s not really necessary (you don’t need a group to add two and two). For truly difficult tasks, the group must be reasonably smart (no one expects monkeys banging on typewriters to come up with the collected works of Shakespeare). The group also must be diverse (otherwise you have nothing more to work with than the smartest expert does). And the group must be large enough and selected from a deep enough pool of individuals (to ensure that the group possesses a wide-ranging mix of skills). Satisfy all four of these criteria, Page says, and you’re good to go.

Surowiecki would emphasize one point in particular: If you want a group to make good decisions, you must ensure that its members don’t interact too much. Otherwise they could influence one another in counterproductive ways through imitation or intimidation—especially intimidation. “In any organization, like a team or company, people tend to pay very close attention to bosses or those with higher status,” Surowiecki says. “That can be very damaging, from my perspective, because one of the great things about the wisdom of crowds, or whatever you want to call it, is that it recognizes that people may have useful things to contribute who aren’t necessarily at the top. They may not be the ones everyone automatically looks to. And that goes by the wayside when people imitate those at the top too closely.”

Diversity. Independence. Combinations of perspectives. These principles should sound familiar. They’re versions of the lessons we learned from the honeybees: Seek a diversity of knowledge. Encourage a friendly competition of ideas. Use an effective mechanism to narrow your choices. What was smart for the honeybees is smart for groups of people, too.

It’s not so easy, after all, to make decisions as efficiently as honeybees do. With millions of years of evolution behind them, they’ve fashioned an elegant system that fits their needs and abilities perfectly. If we could do as well—if we could harness our diversity to overcome our bad habits—then perhaps people wouldn’t say that we’re still thinking with caveman brains.

Caveman Brains

Imagine this scenario: Intelligence agencies have turned up evidence of a plot by at least three individuals to carry out a terrorist attack in Boston. Exactly what kind of attack is not known, but it might be related to a religious conference being held in the city. Possible targets include the Episcopal Church of St. Paul, Harvard’s Center for World Religion, One Financial Plaza, and the Federal Reserve Bank. Security cameras at each building have captured blurry images of ten different individuals acting suspiciously during the past week, though none have been positively identified as terrorists. Intercepted e-mail between suspects appears to include simple code words, such as “crabs” for explosives and “bug dust” for diversions. Time’s running out to crack the plot.

This was the fictional situation presented to fifty-one teams of college students during a CIA-funded experiment at Harvard not long ago. Each four-person team was simulating a counterterrorism task force. Their assignment: sort through the evidence to identify the terrorists, figure out what they were planning to do, and determine which building was their target. They were given an hour to complete the task.

The experiment was organized by Richard Hackman and Anita Woolley, a pair of social psychologists, with collaborators Margaret Gerbasi and Stephen Kosslyn. A few weeks earlier, they’d given the students a battery of tests to find out who was good at remembering code words (verbal working memory) and who was good at identifying faces from a large set of photos (face-recognition ability), skills that tap separate functions of the brain. They used the results of these tests to assign students to teams, arranging it so that some teams had two experts (students who scored unusually high on either verbal or visual skills) and two generalists (students who scored average on both skills), and some teams had all generalists. This was important, because they wanted to find out if a team’s cognitive diversity really affected its performance as strongly as did its level of skills.

The researchers had another goal. They wanted to see if a group’s performance might be improved if its members took time to explicitly sort out who was good at what, put each person to work on an appropriate task—such as decoding e-mails or studying images—and then talked about the information they turned up. Would it enable them, in other words, to exploit not only their diversity of knowledge but also their diversity of abilities? To find out, they told all of the teams how each member had scored on the skills tests, but they coached only half of the teams on how to make task assignments. They left the other half on their own.

The researchers had hired a mystery writer to dream up the terrorist scenario. The solution was that a fictional anti-Semitic group was planning to spray a deadly virus in the vault at the Federal Reserve Bank where Israel stores its gold, thereby making it unavailable for months and supposedly bankrupting that nation. “We made it a little bit ridiculous because we didn’t want to scare anybody,” Woolley says.

Who did the best job at solving the puzzle? Not surprisingly, the most successful teams—the ones that correctly identified the target, terrorists, and plot details—were those with experts that applied their skills appropriately and actively collaborated with one another. What no one expected, however, was that the teams with experts who made little effort to coordinate their work would do so poorly. They did even worse, in fact, than teams that had no experts at all.

“We filmed all the teams and watched them several times,” Woolley says. “What seems to happen is that, when two of the people are experts and two are not, there’s a status thing that goes on. The two that aren’t experts defer to the two that are, when in fact you really need information from all four to answer the problem correctly.”

Why was this disturbing? Because that’s how many analytic teams function in real life, Woolley says, whether they’re composed of intelligence agents interpreting data, medical personnel making a diagnosis, or financial teams considering an investment. Smart people with special skills are often put together to make important decisions, but they’re frequently left on their own to figure out how to apply those skills as a group. Because they’re good at what they do, many talented people don’t feel it’s necessary to collaborate. They don’t see themselves as a group. As a result, they often fail to make the most of their collective talents and end up making a poor decision.

“We’ve done a bunch of field research in the intelligence community and I can tell you that no agency, not the Defense Department, not the CIA, not the FBI, not the state police, not the Coast Guard, not drug enforcement, has everything they need to figure out what’s going on,” Hackman told a workshop on collective intelligence at MIT. “That means that most antiterrorism work is done by teams from multiple organizations with their own strong cultures and their own ways of doing things. And the stereotypes can be awful. You see the intelligence people looking at the people from law enforcement saying, You guys are not very smart, all you care about is your badge and your gun. We know how to do this work, okay? And the law enforcement people saying, You guys wouldn’t recognize a chain of evidence if you tripped over it. All you can do is write summa cum laude essays in political science at Princeton. That’s the level of stereotyping. And they don’t get over it, so they flounder.”

Personal prejudice is a poor guide to decision making, of course. But it’s only one in a long list of biases and bad habits that routinely hinder our judgment. During the past fifty years, psychologists have identified numerous “hidden traps” that subvert good decisions, whether they’re made by business executives, political leaders, or consumers at the mall. Many can be traced to the sort of mental shortcuts we use every day to manage life’s challenges—the rules of thumb we apply unconsciously because our brains, unlike those of ants or bees, weren’t designed to tackle problems collectively.

Consider the trap known as “anchoring,” which results from our tendency to give too much weight to the first thing we hear. Suppose someone asks you the following questions:

Is the population of Chicago greater than 3 million?

What’s your best estimate of Chicago’s population?

Chances are, when you answer the second question, you’ll be basing it on the first. You can’t help it. That’s the way your brain is hardwired. If the number in the first question was 10 million, your answer to the second one would be significantly higher. Late-night TV commercials exploit this kind of anchoring. “How much would you pay for this slicer-dicer?” the announcer asks. “A hundred dollars? Two hundred? Call now and pay only nineteen ninety-five.”

Then there’s the “status quo” trap, which stems from our preference not to rock the boat. All things being equal, we prefer options that keep things the way they are, even if there’s no logic behind that choice. That’s one reason mergers often run into trouble, according to John Hammond, Ralph Keeney, and Howard Raiffa, who described “The Hidden Traps in Decision Making” in the Harvard Business Review. Instead of taking swift action to restructure a company following a merger, combining departments and eliminating redundancies, many executives wait for the dust to settle, figuring they can always make adjustments later. But the longer they wait, the more difficult it becomes to change the status quo. The window of opportunity closes.

Nobody likes to admit a mistake, after all. Which leads to the “sunk-cost” trap, in which we choose courses of action that justify our earlier decisions—even if they no longer seem so brilliant. Hanging on to a stock after it has taken a nosedive may not show the best judgment. Yet many people do exactly that. In the workplace, we might avoid admitting to a blunder—hiring an incompetent person, for example—because we’re afraid it will make us look bad in the eyes of our superiors. But the longer we let the problem drag on, the worse it can be for everyone.

As if these flaws weren’t enough, we also ignore facts that don’t support our beliefs. We overestimate our ability to make accurate predictions. We cling to inaccurate information even after it has been disproved. And we accept the most recent bit of trivia as gospel. As individuals, in short, we tend to make a lot of mistakes with even simple decisions. Throw a problem at us that involves interactions of multiple variables and you’re asking for trouble.

Yet increasingly, analysts say, that’s exactly what business leaders are dealing with. “Managers have long relied on their intuition to make strategic decisions in complex circumstances, but in today’s competitive landscape, your gut is no longer a good enough guide,” writes Eric Bonabeau, who is now chief scientist at Icosystem, a consulting company near Boston. Often managers rise to the top of their organizations because they’ve been able to make tough decisions in the face of uncertainty, he writes. But when you’re dealing with complexity, intuition “is not only unlikely to help, it is often misleading. Human intuition, which arguably has been shaped by biological evolution to deal with the environment of hunters and gatherers, is showing its limits in a world whose dynamics are getting more complex by the minute.”

We aren’t very good at making difficult decisions in complex situations, in other words, because our brains haven’t had time to evolve. “We have the brains of cavemen,” Bonabeau says. “That’s fine for problems that don’t require more than a caveman’s brain. But many other problems require a little more thinking.”

One way to handle such problems, as we’ve seen, is to harness the cognitive diversity of a group. When Jeff Severts asked his prediction market to estimate the probability of the new Best Buy store opening on time, he tapped into a wide range of perspectives, and the result was an unbiased assessment of the situation. In a way, that’s what most of us would hope would happen, since society counts on groups to be more reliable than individuals. That’s why we have juries, committees, corporate boards, and blue-ribbon panels. But groups aren’t perfect either. Unless they’re carefully structured and given an appropriate task, groups don’t automatically produce the best solution. As decades of research have demonstrated, groups have many bad habits of their own.

Take their tendency to ignore useful information. When a group discusses an issue, it can spend too much time going over stuff everybody already knows, and too little time considering facts or points of view known only by a few. Psychologists call this “biased sampling.” Let’s say your daughter’s PTA is planning a fund-raiser. The president asks everybody at the meeting for ideas about what to sell. The group spends the whole time talking about cookies, because everybody knows how to make them, even though many people might have special family recipes for cupcakes, fudge, or other goodies that might be popular. Because these suggestions never come up, the group may squander its own diversity.

Many mistakes made by groups can be traced to rushing a decision. Instead of taking time to put together a full range of options, a group may settle on a choice prematurely, then spend time searching for evidence to support that choice. Perhaps the most notorious example of rushing a decision is a phenomenon that psychologist Irving Janis described as groupthink, in which a tightly knit team blunders into a fiasco through a combination of unfortunate traits, including a domineering leader, a lack of diversity among team members, a disregard of outside information, and a high level of stress. Such teams develop an unrealistic sense of confidence about their decision making and a false sense of consensus. Outside opinions are dismissed. Dissension is perceived as disloyalty. Janis was thinking, in particular, of John F. Kennedy’s reckless decision to back the Bay of Pigs invasion of Cuba in 1961, when historians say that President Kennedy and a small circle of advisors acted in isolation without serious analysis or debate. As a result, when some twelve hundred Cuban exiles landed on the southern coast of the island, they were promptly defeated by the Cuban army and tossed into jail.

Decisions made by groups, in short, can be as dysfunctional as those made by individuals. But they don’t have to be, as the swarm bees have already shown us. When groups contain the right mix of individuals and are carefully structured, they can compensate for mistakes by pooling together a greater diversity of knowledge and skills than any of their members could obtain on their own. That was the lesson of the experiments Hackman and Woolley conducted in Boston: Students did better at identifying the terrorists when they sorted out the skills of each team member and gave everyone a chance to contribute information and opinions to the process. Simply by drawing from a wider range of experiences, as Scott Page’s theorems proved, groups can put together a bigger bag of tricks for problem solving. And when it comes to making predictions, like how many gift cards will be purchased this month, groups can cancel out personal biases and bad habits by combining information and attitudes into a reliable group judgment.

