This is Part I of a two-part series on what lawyers should take from Lee Sedol’s recent loss to AlphaGo in a five-game Go match. Given the length of the series, I have cross-posted the entire piece on “The Algorithmic Society.”

A shudder of excitement went through the tech world recently, and its epicenter was Seoul, South Korea. There, a computer program named AlphaGo played five games of Go against Lee Sedol, a South Korean master of the game ranked fourth in the world. AlphaGo won four of the five games. A computer winning at Go had long been considered a difficult, perhaps impossible, task, and the victory suggests that computers are moving closer to taking over some human tasks much sooner than we imagined. It was also a strong volley by Google in its bid to be the company whose algorithms drive the “thinking” behind the takeover.

Yet, of the approximately 1.25 million lawyers in the United States, it is safe to say that few read about and understood the significance of the victory, some saw the headlines but skipped the stories, and many did not even know the match took place. AlphaGo’s win over Sedol will be one of those moments lawyers will look back on and see as another tipping point they missed. The event that shook the technology world caused barely a tremor in the legal world.

What is Go?

To understand the significance of AlphaGo’s win, you must understand something about Go. The game originated in China and dates back at least 2,500 years. It was considered one of the four “essential arts” of a cultured gentleman (the other three were calligraphy, painting, and the qin, a stringed instrument). Two players do battle on a board with a 19 × 19 grid of lines. Each player strategically places his stones, following a few simple rules, to surround territory (space on the board). The player who surrounds the most territory wins.

There are several measures of game complexity, including game tree size, decision complexity, game tree complexity, computational complexity, and state-space complexity. Journalists often use state-space complexity to describe the relative complexity of games because it is fairly easy to grasp: “the number of legal game positions reachable from the initial position of the game.” The state-space complexity of Go has been estimated at about 10 to the 170, which is more than the total number of atoms in the observable universe. By contrast, the state-space complexity of chess is estimated at roughly 10 to the 47. The difference is not trivial. A computer can play chess using brute force: it can search through the possible move combinations after each play and select the next move out of that universe of possibilities, taking various strategies into consideration. Because the number of possibilities in Go is so much greater, a computer cannot use brute force. Instead, it must do something that approximates human intuition. Go has been described as the “pinnacle of perfect information games.”
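To make the brute-force idea concrete, here is a minimal sketch in Python of exhaustive game-tree search (negamax), applied to a toy take-away game of my own invention rather than to a real board game. With roughly 35 legal moves per chess position, this kind of search, plus aggressive pruning and a good evaluation function, is tractable; with roughly 250 legal moves per Go position, it is not.

```python
# A minimal sketch of brute-force game-tree search (negamax). The toy
# game is hypothetical: take 1-3 stones from a pile; whoever takes the
# last stone wins. Real engines add pruning and evaluation functions.

def negamax(pile):
    """Exhaustively search the game tree and return (score, best_move)
    for the player to move: +1 means a forced win, -1 a forced loss."""
    if pile == 0:
        return -1, None  # the previous player took the last stone; we lost
    best_score, best_move = -2, None
    for move in (n for n in (1, 2, 3) if n <= pile):
        score = -negamax(pile - move)[0]  # opponent's loss is our win
        if score > best_score:
            best_score, best_move = score, move
    return best_score, best_move

# A pile of 10 is a forced win: take 2, leaving the opponent a multiple of 4.
print(negamax(10))  # -> (1, 2)
```

Exhaustive search like this visits every reachable position, which is exactly what Go’s state space makes impossible.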

Round 1: Checkers

Computers have been beating humans at games for more than 20 years. In perfect information games, players move alternately and each player knows all of the other’s prior moves. In 1994, a computer program named “Chinook,” developed by Jonathan Schaeffer at the University of Alberta, was declared the winner of the Man-Machine World Championship against Marion Tinsley, the world’s top checkers player. While the victory was impressive, a question hung over the computer’s abilities: after six straight draws, Tinsley withdrew from the match for health reasons (he was soon diagnosed with pancreatic cancer), and Chinook won by default. Chinook never actually won a game against Tinsley.

In 1995, Chinook played Don Lafferty in a 20-game match, winning one game, losing one, and drawing 18. Schaeffer retired Chinook from competition after that match, but he and his team continued working on the checkers problem: building a program that no human could beat. In 2007, they announced that checkers had been solved. With perfect play on both sides the game is a draw, so the best any human player could achieve against the updated Chinook was a draw.

Round 2: Chess

The next human loss to a computer in a perfect information game came just a few years after Chinook’s victory. Chess had long been viewed as a game that challenged the smartest humans, so a computer beating a human would make quite a statement about the state of computer “intelligence.”

In 1996, IBM’s Deep Blue played Garry Kasparov in a six-game match. Kasparov won 4–2. But in 1997 they played a rematch, which Deep Blue won 3½–2½. It was the first time a computer had beaten a reigning world chess champion in a match played under tournament regulations. Deep Blue’s victory was significant, though it represented brute force more than elegant play. Some believed at the time that Kasparov did not bring his best to every game of the rematch, and that had he played with more human intuition, he would have beaten Deep Blue.

After Deep Blue defeated Kasparov, IBM wanted another challenge to show off its software. Jeopardy presented that challenge. Jeopardy is more complex for a computer than chess. First, there is the format. The host gives the answer and the contestant must respond with the correct question. Second, Jeopardy involves language interpretation. As The New York Times described it, Jeopardy is “a game that requires not only encyclopedic recall, but also the ability to untangle convoluted and often opaque statements, a modicum of luck, and quick, strategic button pressing.”

In February 2011, IBM’s latest masterpiece, Watson, played Jeopardy against Ken Jennings and Brad Rutter, the show’s two most successful human contestants. After a two-game match aired over three nights, the result was a clear win for Watson: $77,147 to Jennings’ $24,000 and Rutter’s $21,600.

As with Deep Blue’s win against Kasparov, Watson’s Jeopardy win was impressive, but it also showed that Watson was not perfect: it famously answered “Toronto” in a Final Jeopardy category about U.S. cities. Computers still had a long way to go when it came to matching wits with humans.

The Final Round: Go

With the chess and Jeopardy matches under its belt, the computer world wanted another win. Go was seen as the ultimate perfect information game challenge. A win against a human would show that computers had moved beyond brute force and were taking on human “intuition.” The computer could not simply crunch numbers; it would have to do something else to beat a Go grandmaster. Two competitors, Google and Facebook, took on the challenge, and Google got there first with AlphaGo.

In October 2015, AlphaGo played a match against Fan Hui, the European Go champion, and won 5–0. While that was a significant victory for AlphaGo, its next match, against Lee Sedol, was an even bigger challenge. Sedol had watched the games between AlphaGo and Fan and was able to evaluate AlphaGo’s strategies and weaknesses. Sedol predicted that, while the computer was good, he was still better.

As it turned out, Sedol was (mostly) wrong. In fact, most experts were wrong. In 2015, before the match against Fan, most experts predicted it would be another decade before a computer could beat a Go grandmaster. But in the few months between the Fan match and the Sedol match, AlphaGo continued improving. Unlike a person, AlphaGo could play games continuously and at a furious pace, learning all the while. What it learned gave it the edge.

How did AlphaGo learn to play Go so well? According to researchers at Google’s DeepMind:

AlphaGo was programmed to sift through a database of expert Go moves, and then play against itself millions of times to improve its performance. Researchers called that part of the program the “policy network.” Another part of the program runs through Monte Carlo simulations to evaluate board positions.
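To give a rough feel for those two ideas, here is a loose sketch, emphatically not DeepMind’s code: a placeholder “policy” that proposes moves, and flat Monte Carlo rollouts that evaluate each candidate move by playing the game out many times. It reuses the toy take-away game from the earlier sketch; AlphaGo’s real system used deep neural networks for the policy and a full Monte Carlo tree search over 19 × 19 Go positions.

```python
import random

# A loose illustration, not DeepMind's code: flat Monte Carlo evaluation
# with a placeholder "policy." The toy game: take 1-3 stones from a pile;
# whoever takes the last stone wins.

def legal_moves(pile):
    return [n for n in (1, 2, 3) if n <= pile]

def policy_sample(pile):
    # Stand-in for AlphaGo's policy network: a uniform random choice
    # instead of learned move probabilities.
    return random.choice(legal_moves(pile))

def playout(pile, our_turn):
    """Play policy moves to the end; return True if 'we' take the last stone."""
    while pile > 0:
        pile -= policy_sample(pile)
        if pile == 0:
            return our_turn      # whoever just moved took the last stone
        our_turn = not our_turn
    return not our_turn

def monte_carlo_move(pile, n_rollouts=2000):
    """Score each legal move by the fraction of playouts won after making it."""
    best_move, best_rate = None, -1.0
    for move in legal_moves(pile):
        if pile - move == 0:
            rate = 1.0           # taking the last stone wins outright
        else:
            wins = sum(playout(pile - move, our_turn=False)
                       for _ in range(n_rollouts))
            rate = wins / n_rollouts
        if rate > best_rate:
            best_move, best_rate = move, rate
    return best_move, best_rate

# With enough rollouts this tends to prefer taking 2 (the perfect-play move),
# even though no position is searched exhaustively.
print(monte_carlo_move(10))
```

The point of the sketch is the division of labor the quote describes: the policy narrows the choices, and simulation, rather than exhaustive search, estimates how good each choice is.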

Today, Demis Hassabis, who co-founded DeepMind and still leads it after the Google acquisition, says his team believes AlphaGo could learn to play entirely through self-play. As Hassabis puts it:

Actually, the AlphaGo algorithm, this is something we’re going to try in the next few months — we think we could get rid of the supervised learning starting point and just do it completely from self-play, literally starting from nothing. It’d take longer, because the trial and error when you’re playing randomly would take longer to train, maybe a few months. But we think it’s possible to ground it all the way to pure learning.

Because Go is so complex, AlphaGo had to learn during training how to use some measure of computer “intuition.” What do we mean when we say AlphaGo has “intuition”? Geoffrey Hinton, an AI researcher at Google who has been called the “godfather of neural networks,” describes it this way:

The really skilled players just sort of see where a good place to put a stone would be. They do a lot of reasoning as well, which they call reading, but they also have very good intuition about where a good place to go would be, and that’s the kind of thing that people just thought computers couldn’t do. But with these neural networks, computers can do that too. They can think about all the possible moves and think that one particular move seems a bit better than the others, just intuitively. That’s what the feedforward neural network is doing: it’s giving the system intuitions about what might be a good move. It then goes off and tries all sorts of alternatives. The neural networks provide you with good intuitions, and that’s what the other programs were lacking, and that’s what people didn’t really understand computers could do.
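As a very rough illustration of the “intuition” Hinton describes, here is a toy feedforward network, with made-up sizes and untrained random weights, that maps a board position to a probability for every possible move. AlphaGo’s actual policy network was a much deeper convolutional network over 19 × 19 boards, trained on millions of positions; this sketch only shows the shape of the computation.

```python
import numpy as np

# A bare-bones, hypothetical sketch: board position in, a probability
# for every move out. Sizes and weights are placeholders, not AlphaGo's.

rng = np.random.default_rng(0)
BOARD_POINTS = 9 * 9  # toy 9x9 board; AlphaGo played on 19x19

# One hidden layer with random (untrained) weights.
W1 = rng.normal(0.0, 0.1, (BOARD_POINTS, 128))
W2 = rng.normal(0.0, 0.1, (128, BOARD_POINTS))

def move_probabilities(board):
    """board: vector of 81 values (+1 our stones, -1 theirs, 0 empty)."""
    hidden = np.tanh(board @ W1)         # feedforward pass
    logits = hidden @ W2
    exp = np.exp(logits - logits.max())  # softmax: turn scores into a
    return exp / exp.sum()               # probability over all moves

board = np.zeros(BOARD_POINTS)
board[40] = 1.0    # one of our stones at the center point
board[41] = -1.0   # an opponent stone beside it
probs = move_probabilities(board)
print(probs.argmax(), round(float(probs.max()), 4))
```

A trained version of this one-pass ranking of every move is what supplies the intuitions; the search then explores only the moves the network rates highly.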

Do AlphaGo’s victory and its use of “intuition” mean computers are getting close to human abilities? According to Hinton, not within the next five years (he refuses to predict anything he thinks is more than five years out):

My belief is that we’re not going to get human-level abilities until we have systems that have the same number of parameters in them as the brain. So in the brain, you have connections between the neurons called synapses, and they can change. All your knowledge is stored in those synapses. You have about 1,000 trillion synapses—10 to the 15, it’s a very big number. So that’s quite unlike the neural networks we have right now. They’re far, far smaller; the biggest ones we have right now have about a billion synapses. That’s about a million times smaller than the brain.

There will not be another perfect information game challenge that surpasses Go. Other strategy games exist, but they involve human language, social interaction, and other dimensions that remain well beyond computer capabilities.

Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, described AlphaGo’s wins as representing “an outstanding technical achievement, … demonstrat[ing] that when the goal is crystal clear, and the rules of the game are simple … computers will dominate.” Etzioni contrasted that situation with the ones lawyers confront: “when the problem is ‘ill-defined,’ as in understanding a sentence, writing an article, or even comforting a friend – this is still way beyond our [AI’s] abilities.” Lawyers should not assume Etzioni’s comments mean machine-learning systems are not ready for legal services. It turns out there are many ways computers can augment what lawyers do.

Part II of this two-part series will be posted on April 7, 2016.