Lee won just one of the matches after Google DeepMind's supercomputer made a "bad mistake" early in the fourth match and struggled to recover from its error. The fifth and final match, held Tuesday, was one of the closest games of the series, with gameplay running well into overtime, according to The Verge. The computer won a $1 million prize in the contest, which will be donated to charity.
The series marks the first time a professional 9-dan Go player has played the complex Chinese board game against a computer. While supercomputers have beaten chess masters at their own game before, Go posed a new kind of challenge for researchers. Because of the extremely complex nature of the ancient game, the computer's victory is an unprecedented upset some experts predicted wouldn't happen for another 10 years.
Why is Go so challenging for AI?
Go is an ancient Chinese board game that has long been considered one of the great challenges faced by AI because it requires a high level of intuition and evaluation.
The idea behind Go, according to Moshe Kranc, CTO of Ness SES and a self-confessed former Go addict, is to use the game's simple rules to build up structures that can "live" because they are connected to the rest of the board's ecosystem.
"To win, one must have the visual ability to spot strong and weak positions in a single glance, as well as the analytic capacity to drill down and plot out a winning series of detailed moves," Kranc said. "After some analysis and experience, you learn to spot winning formations at a glance, but you also must be ready to back up this visual intuition by playing the correct tactical moves required to actually win a space that is theoretically yours."
By nature, deep learning breaks a single complex problem into a hierarchy of simpler sub-problems, each handled by a different processing layer. AlphaGo uses a system of deep neural networks, trained through machine learning, to play the game.
"AlphaGo uses two neural networks—a policy network for evaluating possible moves and a value network to manage the search depth of moves," said Frank Palermo, executive vice president for Global Digital Solutions at Virtusa. "These neural networks were trained on 30 million moves from games played by human experts, resulting in being able to predict the human move 57% of the time."
In other words, automated feature recognition enables the computer to analyze millions of moves from past recorded Go games and learn for itself which moves lead to a winning position in a given situation.
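The division of labor Palermo describes can be sketched, very loosely, in a few lines of Python. Everything below is a toy stand-in invented for illustration: the real policy and value networks are deep convolutional nets trained on millions of positions, and the real search is a full Monte Carlo tree search, not a one-step lookahead.

```python
def policy_network(position, legal_moves):
    """Toy policy net: assign each candidate move a probability.
    (Stand-in heuristic: prefer moves near the board's center.)"""
    center = 9  # center line of a 19x19 board (coordinates 0..18)
    scores = {m: 1.0 / (1 + abs(m[0] - center) + abs(m[1] - center))
              for m in legal_moves}
    total = sum(scores.values())
    return {m: s / total for m, s in scores.items()}

def value_network(position):
    """Toy value net: estimate the win probability of a position.
    (Stand-in: a fixed prior; the real net evaluates the board.)"""
    return 0.5

def select_move(position, legal_moves, top_k=5):
    """Use the policy net to prune the search to the top_k candidate
    moves, then pick the one whose successor position the value net
    rates highest -- the policy network narrows breadth, the value
    network limits depth."""
    priors = policy_network(position, legal_moves)
    candidates = sorted(priors, key=priors.get, reverse=True)[:top_k]
    return max(candidates,
               key=lambda m: value_network(position + [m]))

moves = [(i, j) for i in range(19) for j in range(19)]
best = select_move([], moves)
print(best)  # → (9, 9): the toy policy favors the center point
```

The design point survives the simplification: without the policy network's pruning, the value network would have to evaluate every legal move; without the value network, the policy network alone could not judge how good the resulting positions actually are.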
"Custom hardware enables these calculations to occur in nanoseconds. In effect, the computer can now play Go not just using analysis, but, like humans, can also rely on visual pattern matching," Kranc said. "And, the computer has inferred new winning strategies based on exhaustive analysis of past games that man has not yet discovered."
Because Go requires more than just logic, the fact that AlphaGo won multiple matches against Lee is seen by many as a huge advancement in the field of AI.
"This is rightly being celebrated as a milestone in the development of artificial intelligence technology," said Kranc. "We now seem to have a large enough arsenal of techniques and tools to create programs that can solve even the most complex problems better than any human mind."
Marco Varone, CTO of Expert System, compared AlphaGo to Deep Blue and Watson, which beat the world chess champion in 1997 and the Jeopardy champion in 2011, respectively, but said AlphaGo departs from their brute force approach.
"While playing chess requires calculating and evaluating millions of possible moves every second, playing Go means finding patterns in a very effective way," Varone said.
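Varone's contrast can be made concrete with a back-of-the-envelope calculation. Using widely cited rough estimates (a branching factor of about 35 and a game length of about 80 plies for chess, versus roughly 250 and 150 for Go), the two game trees differ by hundreds of orders of magnitude, which is why exhaustive calculation that works for chess cannot work for Go:

```python
# Rough game-tree sizes: branching_factor ** game_length_in_plies.
# The figures 35/80 (chess) and 250/150 (Go) are commonly cited
# estimates, not exact values.
chess_positions = 35 ** 80
go_positions = 250 ** 150

print(f"chess ~ 10^{len(str(chess_positions)) - 1}")  # chess ~ 10^123
print(f"go    ~ 10^{len(str(go_positions)) - 1}")     # go    ~ 10^359
```

Even a machine evaluating a billion positions per nanosecond could not enumerate a 10^359-node tree, so AlphaGo must rely on learned pattern recognition to prune the search rather than on raw calculation.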
The final AI frontier
AlphaGo's victory is a colossal moment for AI. But what’s the next frontier?
Taylor M. Wells, assistant professor of Management Information Systems at the College of Business Administration at California State University, Sacramento, has closely followed the matches between AlphaGo and Lee.
"While part of me wants to welcome our new robot overlords, I don’t think we are quite there yet," Wells said. "I do think that this underscores the importance and legitimacy of analytics (and) data science for organizations."
DeepMind has already been making significant strides in the healthcare area—a field with huge datasets that requires effective decision-making.
"I think we will likely see these intelligent systems help even more with executive decision-making," said Wells. "The challenge will lie in whether we will trust them, especially when they occasionally lose a game of Go. As my colleagues and I found in a previous study, people may choose to rely upon the decisions of software agents like AlphaGo partially because of their benefit but partially as a means of deflecting blame if they are mistaken."
Kranc, however, sees AlphaGo’s victory as a true game-changer.
"For a certain class of problems, where the rules are well-defined and there is an abundance of training data, machines have now been proven to be superior to humans," Kranc said. "For such problems, if you or your business want the best possible answer, using human brain power is folly or hubris—machines can do it better."
But, Kranc said, he also hopes people will continue to play Go and use their minds to solve problems, despite the fact that a machine may do it faster or better.
"The act of solving problems is itself stimulating and rewarding, and there is no small honor in being the world’s best human Go player, despite the superiority of DeepMind," he said.
Varone does not expect AlphaGo’s victory to be immediately transformative.
"In similar fashion to what happened after Deep Blue, things continued to evolve at a normal pace, and I am expecting that after AlphaGo’s victory we will not see any singularity happening even if this event will remain a very relevant moment in the history of AI," he said.
Varone said he does, however, expect that the visibility and the marketing success of the Go matches will positively affect AI overall.
It will "have a positive effect on investments in AI and on the motivation of smart people to work in this field," said Varone. "Implementing something that is more intelligent than what we have seen up to now is so difficult that more money and more people are more than welcome."