The Digital War - Winston Ma
Match Impossible
China is the birthplace of weiqi (known in the West as Go), an ancient board game played on a 19x19 grid. In Go, two players place black and white stones on the grid, each seeking to seal off the most territory. Historical records show it was played as early as the Zhou dynasty (1046 BC–256 BC). The match against AlphaGo took place in Wuzhen, Zhejiang province, where a canal more than 1,300 years old makes a fitting venue for a game that dates back thousands of years. Wuzhen has also hosted China's annual World Internet Conference, creating a parallel link to the digital power of AlphaGo.
In contrast to Go's long history in Chinese culture, AlphaGo was barely three years old at the time of the 2017 match. Go is considered extremely difficult for computers to master because there are more possible board configurations than there are atoms in the visible universe. Furthermore, human players believe that winning the many battles across the board relies heavily on intuition and strategic thinking, and that a software algorithm cannot simply memorize every combination of stones, assess the situation by calculating all possible moves, and select a winning strategy, as in chess.
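The scale claim above can be checked with a quick back-of-envelope calculation. The sketch below (purely illustrative) counts raw colorings of the board, where each of the 361 intersections is empty, black, or white; this over-counts, since not every coloring is a legal position, yet even the rough bound dwarfs the roughly 10^80 atoms commonly estimated for the visible universe.

```python
# Each of the 19 x 19 = 361 intersections can be empty, black, or white,
# so 3^361 is a simple upper bound on the number of board configurations
# (an over-count: not all of them are legal Go positions).
upper_bound = 3 ** 361

# Count decimal digits to express the bound as a power of ten.
digits = len(str(upper_bound))
print(f"3^361 has {digits} decimal digits, i.e. roughly 10^{digits - 1}")
# -> 3^361 has 173 decimal digits, i.e. roughly 10^172
```

Even this conservative bound, around 10^172, exceeds the estimated number of atoms in the visible universe (~10^80) by more than ninety orders of magnitude, which is why exhaustive calculation of the kind used in chess engines is hopeless for Go.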
Go has therefore been a benchmark for measuring the human mind against artificial intelligence ever since IBM's Deep Blue beat chess grandmaster Garry Kasparov in 1997. For many years there was little progress. More recently, the AlphaGo program developed by Google's DeepMind managed to analyze the game in a different way. AlphaGo used two “deep neural networks” containing millions of connections similar to neurons in the brain: one selects the next move, while the other evaluates board positions to judge how good that decision is.
The Google programmers provided AlphaGo with a database of 30 million board positions drawn from 160,000 real-life games to analyze, and the program was also partly self-taught, having played millions of games against itself after its initial programming (“machine learning”), learning and improving all the while. AlphaGo's success was considered the most significant yet for AI, both because of the complexity of Go, with its astronomically large number of possible scenarios, and because the game is thought to demand precisely the “intuition” or “instinct” long believed to be reserved for humans alone.
After AlphaGo beat the best human player, Google developed a more advanced version, AlphaGo Zero, which was not trained on historical records of human games at all. Instead, AlphaGo Zero was given only the rules of Go before it began training by playing games against itself. Within a few days, AlphaGo Zero easily beat AlphaGo. Clearly, beyond the sheer quantity of data, other factors, such as new algorithms, computing power, and the kinds of data available, may be just as valuable for AI training.
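The self-play idea can be illustrated with a toy sketch. The Python below is emphatically not DeepMind's method: it replaces the deep value network with a simple lookup table, and the game of Go with single-pile Nim (players alternately take 1–3 stones; whoever takes the last stone wins). But it shows the same principle AlphaGo Zero embodied, namely learning from the rules and self-play alone, with no human games to imitate.

```python
import random

random.seed(1)  # make the toy run reproducible

PILE = 10
MOVES = (1, 2, 3)

# pile size -> estimated win probability for the player to move.
# This table is the stand-in for AlphaGo Zero's value network.
value = {}


def estimate(pile):
    """Unknown positions start out as coin flips."""
    return value.get(pile, 0.5)


def choose_move(pile, explore=0.1):
    """Leave the opponent the worst position, with occasional exploration."""
    legal = [m for m in MOVES if m <= pile]
    if random.random() < explore:
        return random.choice(legal)
    # Taking the last stone wins outright, so an empty pile is worth 0
    # to the opponent who would face it.
    return min(legal, key=lambda m: estimate(pile - m) if pile > m else 0.0)


def self_play_game():
    """Play one game against itself, then update the value table."""
    pile, history = PILE, []
    while pile > 0:
        history.append(pile)
        pile -= choose_move(pile)
    # The player who moved last won. Walk back through the positions,
    # alternating the outcome, and nudge each estimate toward it.
    outcome = 1.0
    for pos in reversed(history):
        value[pos] = estimate(pos) + 0.2 * (outcome - estimate(pos))
        outcome = 1.0 - outcome


for _ in range(5000):
    self_play_game()

print({p: round(estimate(p), 2) for p in range(1, 10)})
```

In this toy game, piles that are multiples of 4 are theoretically lost for the player to move. After a few thousand self-play games the learned table rates positions 4 and 8 close to zero and the others close to one, knowledge it was never given, which is the essence of what AlphaGo Zero did at vastly greater scale.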
As if in response to humans' confusion, fear, and dejection, at the end of the match the DeepMind team announced that AlphaGo would retire from competing against human players. Instead, the team would largely shift toward using AI to solve problems in health, energy, and other fields. There was no doubt about AI's overwhelming superiority, at least in the game of Go.
First, the AI program showed a deeper understanding of Go than humans, to the point of near perfection. From time to time, AlphaGo placed seemingly random stones that later turned out to set up winning positions. The surprises kept coming in all three games, with the AI program making “unconventional” and “interesting” moves against Ke Jie. In a later interview, Ke Jie vowed never again to subject himself to the “horrible experience”, because he “had had enough”.
“For human beings”, a visibly flummoxed Ke Jie said with a resigned expression, “our understanding of the Go game is really very limited”. Meanwhile, “AlphaGo to me is 100% perfection”, he added, with evident helplessness and dejection. Even for the world's No. 1 player, confronting an opponent that never makes mistakes and always finds the best possible move was no longer a competition, but torture.
Second, the AI program had no emotions or feelings, which seemed to be another advantage over humans. In close games, that may have given AlphaGo an edge. Toward the end of the second game, Ke Jie was visibly agitated, tugging at his hair, rubbing his chest, and resting his head on the table from time to time. After the game he confessed that, when he thought he might have a chance of winning in the middle of the game, he became too keyed up to stay calm. “I was very excited. I could feel my heart thumping”, he said. “Maybe because I was so excited I made some stupid moves”.
Third, the AI program made the game of Go more interesting. One-time world champion Shi Yue commented that in games between human players he had never seen moves like AlphaGo's, and was unlikely to in the future. The question that follows is whether there is still value in human-versus-human games. If games between AI programs become more interesting and unpredictable, the existential value of professional Go players could be called into question.
Table 1.1 The speedy ascent (and retirement) of AlphaGo
November 2015 | DeepMind organized a secret match with Fan Hui, Chinese 2-dan pro and winner of several European championships. AlphaGo won 3–2 in unofficial training games, and won 5–0 in the official match |
January 2016 | DeepMind published a paper in the journal Nature describing the AI system behind the AlphaGo version that beat Fan Hui. The team also announced a five-game match against Lee Sedol, the top player of the previous 10 years |
March 2016 | The upgraded version of AlphaGo played a best-of-five match against 9-dan Lee Sedol, the multiple world champion from South Korea, and won 4–1 |
January 2017 | A new, upgraded version of AlphaGo (called “Master”) won 60–0 against top professionals from China, Korea, and Japan in fast-paced games (mostly 30 seconds per move) |
May 2017 | AlphaGo defeated 9-dan Ke Jie, the reigning top-ranked player from China, 3–0 in a three-game match |
May 2017 | DeepMind team announced that AlphaGo would “retire” from competing against human players |
Note: In professional Go, ranks in China, Japan, and Korea start at 1-dan and go up to 9-dan, the rank of the strongest players.
Most strikingly, the leaps in AI capability happened over a very short period. AlphaGo's ascent to the top of the Go world was unlike the trajectory of chess-playing machines. Because of Go's vast number of possible scenarios (on the order of 10 to the power of 360!), many professional players estimated that it would take AI at least another 10 years to outperform top human players. In fact, the development and perfection of AlphaGo spanned less than three years (see Table 1.1). “Last year, I think the way AlphaGo played [against Lee Sedol of South Korea] was still quite human-like, but today I think he plays like the God of Go”, Ke Jie said after the game.
As such, AlphaGo's superior calculation power stripped Chinese audiences of their initial curiosity about AI and threw them into confusion. Almost overnight, China's internet business community began discussing “the second half” of the mobile internet economy, which in 2013–2016 had driven a boom in e-commerce and online entertainment. Since 2017, the new keywords have been data and intelligence, and the resolve to close the gap with Silicon Valley in deploying AI, and quickly surpass it, is prevalent across the country.