SuperCooperators, by Roger Highfield

TAKE ADVANTAGE OF MISTAKES


As I studied for my doctorate with Karl, we devised a way to take confusion, slips, and mistakes into account. In the jargon, instead of the conventional deterministic strategies we used probabilistic strategies, where the outcome of the game becomes more fuzzy and random. We decided to explore the evolution of cooperation when there is noise by holding a probabilistic tournament in a computer, building on Axelrod’s pioneering work. The idea was to use a spectrum of strategies, generated at random by mutation and evaluated by natural selection.

All of our strategies were influenced by chance. They would cooperate with a certain probability after the opponent had cooperated, and they would cooperate with another probability after the opponent had defected. Think of it this way: we are able to put varying shades of “forgiveness” into the set of strategies that we explore. Some forgive one defection in two; others, one in five; and so on. And some strategies, of course, are unbending. These Old Testament–style strategies almost never forgive. As was the case with the Grim strategy, they refuse ever to cooperate again after an opponent has defected even once.
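
Strategies of this kind, which respond only to the opponent's previous move, can be captured in a few lines of code. The sketch below is only an illustration of the idea, not the actual tournament code; the named (p, q) pairs are the conventional values for these strategies.

```python
import random

# A probabilistic (reactive) strategy is a pair (p, q):
#   p = probability of cooperating after the opponent cooperated
#   q = probability of cooperating after the opponent defected
def reactive_move(p, q, opponent_last):
    """Return this player's move, 'C' or 'D', given the opponent's previous move."""
    prob = p if opponent_last == 'C' else q
    return 'C' if random.random() < prob else 'D'

# A spectrum of forgiveness, expressed as (p, q) pairs:
ALWAYS_DEFECT  = (0.0, 0.0)   # never cooperates, whatever happened
TIT_FOR_TAT    = (1.0, 0.0)   # copies the opponent's last move exactly
FORGIVE_1_IN_5 = (1.0, 0.2)   # forgives one defection in five
GENEROUS_TFT   = (1.0, 1/3)   # forgives one defection in three
```

Note that the unbending Grim strategy cannot be written as a (p, q) pair: it must remember whether the opponent has *ever* defected, not just what happened in the last round.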

To study the evolution of cooperation, we seasoned the mix with the process of natural selection so that winning strategies multiplied while less successful rivals fell by the wayside and perished. The strategies that got the most points would be rewarded with offspring: more versions of themselves, all of which would take part in the next round. Equally, those that did badly were killed off. For extra realism, we arranged it so that reproduction was not perfect. Sometimes mutation could seed new strategies.
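
The selection-and-mutation loop just described can be sketched as follows. This is a minimal reconstruction under assumptions the text does not spell out: the standard Prisoner's Dilemma payoffs T=5, R=3, P=1, S=0, a fixed game length, both players opening as if the other had cooperated, and payoff-proportional reproduction. The real tournaments may have differed in any of these details.

```python
import random

T, R, P, S = 5, 3, 1, 0   # assumed standard Prisoner's Dilemma payoffs
ROUNDS = 50               # illustrative repeated-game length

def play(pq1, pq2, rounds=ROUNDS):
    """Play two reactive (p, q) strategies against each other; return total payoffs."""
    payoff = {('C', 'C'): (R, R), ('C', 'D'): (S, T),
              ('D', 'C'): (T, S), ('D', 'D'): (P, P)}
    last1 = last2 = 'C'   # both open as if the other had just cooperated
    score1 = score2 = 0
    for _ in range(rounds):
        move1 = 'C' if random.random() < (pq1[0] if last2 == 'C' else pq1[1]) else 'D'
        move2 = 'C' if random.random() < (pq2[0] if last1 == 'C' else pq2[1]) else 'D'
        a, b = payoff[(move1, move2)]
        score1 += a
        score2 += b
        last1, last2 = move1, move2
    return score1, score2

def next_generation(population, mu=0.01):
    """Payoff-proportional reproduction; mutation seeds a random new (p, q) strategy."""
    scores = [0.0] * len(population)
    for i in range(len(population)):              # round-robin tournament
        for j in range(i + 1, len(population)):
            a, b = play(population[i], population[j])
            scores[i] += a
            scores[j] += b
    offspring = random.choices(population, weights=scores, k=len(population))
    return [(random.random(), random.random()) if random.random() < mu else s
            for s in offspring]
```

Iterating `next_generation` over thousands of steps is what lets the long-run dynamics described in the following paragraphs play out: high scorers leave more copies, losers vanish, and the occasional mutant introduces a strategy no one has tried before.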

Now Karl and I could sit back and watch the strategies slug it out in our creation over thousands and thousands of generations. Our fervent hope was that one strategy would emerge victorious. Even though no evolutionary trajectory ever quite repeated itself, there were overall patterns and consistency in what we observed. The tournament always began with a state of “primordial chaos”: nothing but random strategies. Out of this mess, one strategy, Always Defect, would inevitably take an early lead. As in so many Hollywood movies, the baddies get off to a flying start.

For one hundred generations or so, the Always Defect strategy dominated our tournament. The plot of life seemed to have a depressing preface in which nature appeared cold-eyed and uncooperative. But there was one glimmer of hope. In the face of this unrelenting enemy, a beleaguered minority of Tit for Tat players clung on at the edge of extinction. Like any Hollywood hero, their time in the sun would eventually come: when the exploiters had no one left to exploit, once all the suckers had been wiped out, the game would suddenly reverse direction. Karl and I took great pleasure in watching the Always Defectors weaken and then die out, clearing the way for the triumphant rise of cooperation.

When thrown into a holdout of die-hard defectors, a solitary Tit for Tat will do less well than defecting rotters, because it has to learn the hard way, always losing the first round, before switching into retaliatory mode. But when playing other Tit for Tat–ers, it will do significantly better than Always Defect and other inveterate hard-liners. In a mixture of players who adopt Always Defect and Tit for Tat, even if the latter only makes up a small percentage of the population, the “nice” policy will start multiplying and quickly take over the game. Often the defectors do so poorly that they eventually die out, leaving behind a cooperative population consisting entirely of Tit for Tat.
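
The arithmetic behind this takeover is easy to check. Under the standard payoffs T=5, R=3, P=1, S=0 and a 100-round game (both assumed here; the text gives no numbers), Tit for Tat loses every head-to-head encounter with Always Defect, yet its population average overtakes the defectors' once Tit for Tat makes up even a tiny fraction of the players:

```python
T, R, P, S = 5, 3, 1, 0   # assumed standard Prisoner's Dilemma payoffs
n = 100                   # illustrative number of rounds per game

# Per-game payoffs for the four possible pairings:
tft_vs_tft   = n * R             # mutual cooperation throughout
tft_vs_alld  = S + (n - 1) * P   # exploited once, then mutual defection
alld_vs_tft  = T + (n - 1) * P   # exploits once, then mutual defection
alld_vs_alld = n * P             # mutual defection throughout

def average_payoff(x):
    """Average payoffs of TFT and ALLD when a fraction x of opponents play TFT."""
    tft  = x * tft_vs_tft  + (1 - x) * tft_vs_alld
    alld = x * alld_vs_tft + (1 - x) * alld_vs_alld
    return tft, alld
```

A lone Tit for Tat surrounded by defectors averages 99 points to their 100, losing as the text describes; but with just 5 percent Tit for Tat in the mix, the averages are already roughly 109 to 100 in its favor, and the nice strategy starts to multiply.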

But Karl and I were in for a surprise. In our computer tournaments, Tit for Tat–ers did not ultimately inherit the Earth. They eventually lost out to their nicer cousins, who exploited Tit for Tat’s fatal flaw of not being forgiving enough to stomach the occasional mishap. After a few generations, evolution will settle on yet another strategy, which we called Generous Tit for Tat, in which natural selection has tuned the optimum level of forgiveness: always meet cooperation with cooperation, and when facing defection, cooperate in one out of every three encounters (the precise details depend on the values of the payoffs being used). The recipe for forgiveness had to be probabilistic, so that the prospect of letting bygones be bygones after a bad move was a matter of chance, not a certainty; letting your opponent know exactly when you were going to be nice would be a mistake (John Maynard Smith’s Tit for Two Tats, which retaliates only after two defections in a row, could easily be exploited by alternating cooperation and defection). Generous Tit for Tat works in this way: never forget a good turn, but occasionally forgive a bad one.
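
The “one in three” figure follows from a simple rule for the optimal level of generosity, which, as noted above, depends on the payoff values. A standard formulation from the published analysis of Generous Tit for Tat sets the forgiveness probability q as below; the payoff values plugged in are the conventional T=5, R=3, P=1, S=0, which the text does not state explicitly.

```python
# Optimal forgiveness probability for Generous Tit for Tat:
#   q = min(1 - (T - R)/(R - S), (R - P)/(T - P))
# The first term keeps generosity from inviting exploitation; the second
# keeps it from outweighing the gains of restored cooperation.
def gtft_forgiveness(T, R, P, S):
    """Probability of cooperating after the opponent has just defected."""
    return min(1 - (T - R) / (R - S), (R - P) / (T - P))

q = gtft_forgiveness(5, 3, 1, 0)   # 1/3: forgive one defection in three
```

With these payoffs the first term, 1 − 2/3 = 1/3, is the binding one, which is where “cooperate in one of every three encounters” comes from; different payoffs would shift that balance point, just as the text says.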

Generous Tit for Tat can easily wipe out Tit for Tat and defend itself against being exploited by defectors. The Generous strategy dominates for a very long time. But, due to the randomness in our tournaments, it does not rule forever. We observed how slowly, almost imperceptibly, a population of Generous Tit for Tat mutates and drifts toward more and more lenient strategies. Ultimately, the population becomes uniformly nice: all cooperate. The reason is that when everybody tries to be nice, forgiveness pays handsomely. There is always an incentive to forgive quicker and quicker because the highest rewards come from having many productive (that is, cooperative) interactions. Now, at last, defectors have a chance to rise up again, with the help of the right mutation. A population of nice players who always cooperate is dry tinder for an invasion by any lingering or newly emerged defector. In this way, the cycle starts anew.

These probabilistic games are always different in detail. But there was an overall pattern. Karl and I would always see the same strategies wax and wane in turn. The cycles play out in a predictable way, sweeping from all defectors to Tit for Tat, to Generous Tit for Tat, then to all cooperators. Finally, with a great crash, the makeup of the community lurches back to being dominated by dastardly defectors all over again.

The good news is that a reasonably nice strategy dominates the tournament: when you average over the entire duration of a game, the most common strategy is Generous Tit for Tat. The bad news is that, in the real world, these cycles could play out over years, decades, or even centuries. Plenty of anecdotal evidence suggests that these cycles turn in human history too. Kingdoms come and go. Empires spread, decline, and crumble into a dark age. Companies rise up to dominate a market and then fragment and splinter away again in the face of thrusting, innovative competitors.

Just as these tournaments never see one strategy emerge with total victory, so it seems that a mix of cooperators (law-abiding citizens) and defectors (criminals) will always persist in human societies. The same goes for beliefs. One faith rises and another declines, the very scenario that prompted Augustine to write The City of God (De civitate Dei) after Rome was sacked by the Visigoths in 410. Augustine wanted to counter claims that Rome had been weakened by adopting Christianity, but as our computer tournaments made clear, great empires are destined to decline and fall: it was more a case of delapsus resurgam—when I fall I shall rise—and vice versa.

As the latest recession has vividly underlined, and as has been noted over the past few decades, there are economic cycles too. Regulations are introduced, then people figure out clever ways to evade them over the years. Periods of hard work and grinding toil are followed by those of leniency, when people slacken, take time off, and exploit the system. In our computer simulations, had we stumbled upon a mathematical explanation for the fundamental cycles of life that endlessly whirl around phases of cooperation and defection?
