Ten years ago, AlphaGo, Google DeepMind’s AI program, stunned the world by defeating the South Korean Go player Lee Sedol. In the years since, AI has upended the game. It has overturned centuries-old principles about the best moves and introduced entirely new ones. Players now train to replicate AI’s moves as closely as they can rather than inventing their own, even when the machine’s thinking remains mysterious to them. Today, it is essentially impossible to compete professionally without using AI. Some say the technology has drained the game of its creativity, while others think there is still room for human invention. Meanwhile, AI is democratizing access to training, and more female players are climbing the ranks as a result.
For Shin Jin-seo, the top-ranked Go player in the world, AI is an invaluable training partner. Every morning, he sits at his computer and opens a program called KataGo. Nicknamed “Shintelligence” for how closely his moves mimic AI’s, he traces the glowing “blue spot” that represents the program’s suggestion for the best next move, rearranging the stones on the digital grid to try to understand the machine’s thinking. “I constantly think about why AI chose a move,” he says.
When training for a match, Shin spends most of his waking hours poring over KataGo. “It’s almost like an ascetic practice,” he says. According to a 2022 study by the Korean Baduk League, Shin’s moves match AI’s 37.5% of the time, well above the 28.5% average the study found among all players.
“My game has changed a lot,” says Shin, “because I have to follow the directions suggested by AI to some extent.” The Korea Baduk Association says it has reached out to Google DeepMind in the hopes of arranging a match between Shin and AlphaGo to commemorate the tenth anniversary of its victory over Lee. A spokesperson for Google DeepMind said the company couldn’t provide information at this time. But if a new match does happen, Shin, who has trained on more advanced AI programs, is optimistic that he’d win. “AlphaGo still had some flaws then, so I think I could beat it if I target those weaknesses,” he says.
AI rewrites the Go playbook
Go is an abstract strategy board game invented in China more than 2,500 years ago. Two players take turns placing black and white stones on a 19×19 grid, aiming to conquer territory by surrounding their opponent’s stones. It’s a game of striking mathematical complexity. The number of possible board configurations, roughly 10¹⁷⁰, dwarfs the number of atoms in the universe. If chess is a battle, Go is a war. You suffocate your enemy in one corner while fending off an invasion in another.
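That 10¹⁷⁰ figure is easy to sanity-check with back-of-the-envelope arithmetic: each of the 361 grid points can hold a black stone, a white stone, or nothing, giving 3³⁶¹ raw configurations, a loose upper bound on the count of legal positions (a minimal sketch, not a legality computation):

```python
# Rough upper bound on Go board configurations: each of the
# 19 * 19 = 361 points is black, white, or empty, so there are
# at most 3**361 arrangements. The true count of *legal*
# positions is somewhat smaller, on the order of 10**170.
points = 19 * 19
upper_bound = 3 ** points

print(points)                 # 361
print(len(str(upper_bound)))  # 173 decimal digits, i.e. ~10**172
print(upper_bound > 10 ** 80) # True: far beyond ~10**80 atoms in the universe
```

The bound overcounts because many arrangements (for example, stones with no liberties left on the board) can never occur in play, which is why the legal-position count quoted in the article is a couple of orders of magnitude smaller.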
To train AI to play Go, a massive trove of human Go moves is fed into a neural network, a computing system that mimics the web of neurons in the human brain. AlphaGo, which was later christened AlphaGo Lee after its victory over Lee Sedol, was trained on 30 million Go moves and refined by playing millions of games against itself. In 2017, its successor, AlphaGo Zero, picked up Go from scratch. Without studying any human games, it learned by playing against itself, with moves based only on the rules of the game. The blank-slate approach proved more powerful, unconstrained by the limits of human knowledge. After three days of training, it beat AlphaGo Lee 100 games to zero.
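The self-play idea can be illustrated on a much smaller game. The sketch below is a toy analogue, not AlphaGo Zero’s actual method: an agent learns Nim (one pile of stones, take 1–3 per turn, taking the last stone wins) purely by playing against itself from the rules alone, with a simple lookup table standing in for the deep neural network and tree search that the real system uses.

```python
import random

# Toy self-play learner for Nim, illustrating learning from the
# rules alone. Q[(s, a)] estimates the value of taking a stones
# when s remain, from the mover's point of view.
random.seed(0)
PILE, ACTIONS = 10, (1, 2, 3)
Q = {(s, a): 0.0 for s in range(1, PILE + 1) for a in ACTIONS if a <= s}

def best(s):
    """Greedy move from state s under the current value table."""
    return max((a for a in ACTIONS if a <= s), key=lambda a: Q[(s, a)])

for _ in range(20000):  # self-play episodes: the agent plays both sides
    s = PILE
    while s > 0:
        # Explore randomly half the time, otherwise play greedily.
        legal = [a for a in ACTIONS if a <= s]
        a = random.choice(legal) if random.random() < 0.5 else best(s)
        nxt = s - a
        if nxt == 0:
            Q[(s, a)] = 1.0   # took the last stone: a win for the mover
        else:
            # The opponent moves next, so negate their best value.
            Q[(s, a)] = -max(Q[(nxt, b)] for b in ACTIONS if b <= nxt)
        s = nxt

print(best(6))  # prints 2: leave a multiple of 4, the known winning strategy
```

Without ever seeing a human game, the table converges on Nim’s classic winning strategy (always leave the opponent a multiple of four stones). AlphaGo Zero applied the same loop at vastly greater scale, replacing the table with a neural network guided by Monte Carlo tree search.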
