Science fiction has a plethora of ideas about what happened in the past and what to expect from the future. Unfortunately, not all of those ideas are exactly plausible in reality. In Suspension of Disbelief, we’ll take a look at the best ideas from sci-fi movies, books, comics and videogames to see where (and if) they intersect with the real world.
In 2001: A Space Odyssey, HAL 9000 plays chess with astronaut Frank Poole. Poole makes a move and HAL explains that in the next three moves he’ll have checkmate. Poole resigns, not surprised in the least that he’s lost, and HAL politely says, “Thank you for a very enjoyable game.” The novel version of the story explains that HAL is programmed to lose fifty percent of the games he plays so as not to demoralize the astronauts. In 1968, when the film and novel were released, computers could hold their own in amateur tournaments, but none came close to competing at professional levels, and the idea of a computer playing so perfectly that it would have to be programmed to lose was a distant dream.
As long as humans have been building machines to mimic our physical form, we’ve been trying to make them mimic our minds as well, and, in these early days of artificial intelligence, one of the best ways to measure how a machine parses information and makes decisions is through games. Though games are sometimes dismissed as superfluous, they exercise our brains in creative ways. They demonstrate the complexity, long-term planning, memory, adaptability and elasticity of the human brain, and coding all of that into the workings of a machine is a challenge that has only recently been conquered.
Machines long ago surpassed humans in contests of strength, speed and physical endurance, but until the last few years, games felt like one of the last refuges where humanity hadn’t been overtaken by its own creations. As this short history shows, computers are now superior at games too, and soon they’ll be applying the lessons learned in checkers, chess and more to numerous other areas, including negotiating sales, recognizing faces and planning military strategy. Let’s just hope they remain as polite as HAL was during his chess match.
Hailing from upstate New York, Cameron Wade is a freelance writer interested in movies, videogames, comic books and more. You can find his work at protogeektheblog.wordpress.com.
The Mechanical Turk
Riding the 18th-century wave of enthusiasm for automatons, Wolfgang von Kempelen built the Automaton Chess Player in 1770 for the Empress Maria Theresa of Austria. The Mechanical Turk, as it was often called because of its robes and turban, sat at a desk with a chessboard on top and cabinets in its sides that seemed to show it was filled with gears, cogs and machinery. Using its left hand to pick up and move pieces, the Turk could beat nearly any opponent it faced. At the sight of a machine that seemed to think and plan as well as any human, some spectators believed it was possessed by supernatural powers or that Kempelen was controlling it via magnets. Kempelen would wander around the room and invite spectators to place magnets on the desk to prove he wasn't in control of it.
The Turk toured Europe beating opponent after opponent, including Benjamin Franklin, then serving as an ambassador in Paris. When Kempelen died, the Turk was bought by Johann Mälzel, who continued to tour it, most notably playing Napoleon, who so angered the Turk by playing illegal moves that it scattered the chess pieces to the floor. Mälzel took the Turk on an American tour, where even Edgar Allan Poe attempted to deduce how it worked. Mälzel died on the trip, and the Turk eventually wound up in the Peale Museum in Baltimore, where it was destroyed in a fire in 1854. After the Turk's demise, a previous owner's son wrote a series of articles for Chess Monthly explaining its secrets: no supernatural or mechanical force drove the Turk. As many, including Poe, had guessed, Kempelen and Mälzel hired master chess players to hide inside the desk. From there, they could view the chessboard from below and control the Turk using a lever that moved its arm and fingers. As many suspected, machines weren't yet capable of mimicking the human intelligence needed to play chess.
Photo via Wikipedia
OXO
Tic-tac-toe is a simple game for humans to play well, so it's not much of a task for computers, including ones built over 60 years ago. In 1952, Alexander S. Douglas was working on his Ph.D. thesis on human-computer interaction at the University of Cambridge. Douglas coded a game of tic-tac-toe to run on the university's Electronic Delay Storage Automatic Calculator (EDSAC), one of the first computers with a stored memory that could be both written to and read from. The player used a rotary phone dial to select the square they wanted to fill in, and one of the computer's three 35 × 16 cathode ray tube screens was repurposed to display the board. Since the game showed its state on an electronic display, it can be argued that Douglas' game was the first video game ever made.
After finishing his thesis, Douglas discarded the game, but it was later recovered from the punched paper tape it was coded on and dubbed OXO. Even back in 1952, the computer was capable of playing tic-tac-toe perfectly. If a human player didn't make any mistakes, the best they could do against the machine was a draw. Tic-tac-toe is a solved game: every possible permutation is known and accounted for, and perfect play by both sides always ends in a draw. Unfortunately, since the computer was one of only a few in existence and was used exclusively for academic research, the game was never seen by the public. Douglas got his Ph.D., but he never programmed another game. Today, programming a computer to play tic-tac-toe is a beginner exercise in artificial intelligence.
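It's a beginner exercise for a reason: the whole game fits comfortably inside a brute-force search. Below is a minimal sketch of that exercise in Python (not Douglas' EDSAC program, which worked very differently): a minimax search that walks every possible game and confirms that tic-tac-toe is a draw under perfect play.

```python
def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals
    for a, b, c in lines:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Value of the position with `player` to move: +1 means X wins
    with perfect play, -1 means O wins, 0 means a draw."""
    w = winner(board)
    if w:
        return 1 if w == 'X' else -1
    open_squares = [i for i, cell in enumerate(board) if cell is None]
    if not open_squares:
        return 0  # board full, nobody won: a draw
    values = []
    for square in open_squares:
        board[square] = player
        values.append(minimax(board, 'O' if player == 'X' else 'X'))
        board[square] = None
    return max(values) if player == 'X' else min(values)

# Evaluate the empty board with X to move.
print(minimax([None] * 9, 'X'))  # -> 0: perfect play always draws
```

OXO's opponents ran into exactly this wall: play flawlessly and the best you can get is the 0 on the last line.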
Photo via Computer Laboratory, University of Cambridge/Image via Lifewire
Chinook
In the early 1990s, playing chess at a master level was beyond the capabilities of computers, but checkers, a simpler game with a smaller board, was within reach. Jonathan Schaeffer and his team at the University of Alberta built Chinook to play checkers, and the machine took second place at the 1990 U.S. Nationals, qualifying it to compete for the national title against Marion Tinsley, generally considered the greatest checkers player of all time, far ahead of even the next best player of his era. Chinook was actually eight computers, housed in a refrigerator-sized case, all working simultaneously to determine the best move, searching 24 moves ahead of the current position. Tinsley could reportedly see 30 to 60 moves ahead and chalked up his advantage to having "a better programmer: God."
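Chinook's real search involved parallel hardware, opening books and endgame databases, but the core mechanism of "looking N moves ahead" is depth-limited game-tree search with alpha-beta pruning. Here's a minimal Python sketch of that idea, using a made-up "subtraction game" (players alternately take one to three tokens; whoever takes the last token wins) as a stand-in for checkers:

```python
import math

def moves(n):
    # Legal moves: take 1, 2 or 3 tokens, never more than remain.
    return [m for m in (1, 2, 3) if m <= n]

def alphabeta(n, depth, alpha, beta, maximizing):
    if n == 0:
        # The previous player took the last token and won.
        return -1 if maximizing else 1
    if depth == 0:
        return 0  # search horizon reached: a real engine guesses a score here
    if maximizing:
        value = -math.inf
        for m in moves(n):
            value = max(value, alphabeta(n - m, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # prune: the opponent would never allow this line
        return value
    else:
        value = math.inf
        for m in moves(n):
            value = min(value, alphabeta(n - m, depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

# 20 tokens is a lost position for the player to move: whatever they
# take, the opponent can always leave another multiple of 4 behind.
print(alphabeta(20, 25, -math.inf, math.inf, True))  # -> -1
```

A real engine replaces the `return 0` at the horizon with a handcrafted evaluation of the position; Chinook effectively ran a far deeper, far smarter version of this loop split across eight machines at once.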
The two held their 40-game match in 1992. By the 18th game, Tinsley was down two games to one (the other 14 had been drawn) when his fortunes changed: Chinook froze. Tournament rules dictated that each player make 20 moves per hour of play, and Schaeffer and his team were unable to get Chinook running again in time. After that, Tinsley won two more games to take the match, besting his computer opponent. In 1994, Tinsley and an upgraded Chinook held a rematch, but it ended after six games when Tinsley became ill. He was diagnosed with pancreatic cancer and died the next year. In 1996, Chinook won the U.S. National Tournament by the widest margin ever, and the man vs. machine title hasn't been contested since.
Photo via jonathanschaeffer.blogspot.com
Deep Blue
Checkers has a game-tree complexity of about 10^40, meaning there are roughly 10,000,000,000,000,000,000,000,000,000,000,000,000,000 possible unique games that can be played. Chess, on the other hand, has a game-tree complexity of about 10^124. That enormous increase in complexity is what made chess so difficult for a program to master, and yet, soon after Chinook proved computers could outplay humans at checkers, IBM built its own chess-playing supercomputer to take on the world's reigning chess champion, Garry Kasparov. Kasparov and Deep Blue first met in 1996 for a six-game match, which Kasparov won by limiting Deep Blue's options, keeping it from building long-term strategies by locking its pieces into the corner and forcing it to make useless, repetitive moves. After a round of updates and improvements, Deep Blue faced Kasparov again in 1997. Kasparov won the first game and lost the second. Though Kasparov was ahead in each of the next three games, Deep Blue managed to pull out a draw in all three. Going into the sixth game, Deep Blue and Kasparov were tied; victory for either would clinch the match. Kasparov fell for a knight sacrifice early on and Deep Blue decimated his defense, ending the game in just 19 moves. It was the first time a computer had ever beaten a reigning world chess champion in a match under tournament conditions.
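Those game-tree figures come from simple arithmetic: a game offering roughly b legal moves per turn over roughly d turns allows on the order of b^d distinct games. A quick Python sanity check, using averages commonly quoted for chess (the exact values vary by source):

```python
import math

# Rough game-tree size: b legal moves per turn over d turns gives ~b**d
# distinct games. 35 moves and 80 half-moves are commonly quoted chess
# averages, not exact counts; checkers' forced captures keep its
# branching factor under 3, which is why its tree is so much smaller.
branching, turns = 35, 80
print(f"chess: ~10^{turns * math.log10(branching):.0f} possible games")  # ~10^124
```

Run the same arithmetic with the numbers for Go in the next slide and the result is even more staggering.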
Kasparov recently commented that, "Today you can buy a chess engine for your laptop that will beat Deep Blue quite easily." Once computers had proven themselves superior at chess, a game often seen in the West as emblematic of humanity's intelligence, the field of artificial intelligence set its sights on another board game: the ancient Chinese game of Go.
Photo via Adam Nadel/Associated Press
AlphaGo
Go is a popular board game in China, Korea and Japan in which players place colored stones on a board to surround and capture their opponent's pieces, loosely similar to reversi. Its rules are quite simple, but that simplicity belies an immense complexity. Go's game-tree complexity is about 10^360, meaning there are more possible games than there are atoms in the observable universe. There are generally 150 to 250 legal moves available on each turn of Go, compared to chess's average of about 37. A computer would have to process 320 billion possible combinations just to predict the next four moves of a game, a feat that would take the most powerful supercomputer in the world two hours to complete. But just as chess was seen as an unconquerable problem for computers before the 1990s, Go too has now been conquered by artificial intelligence.
Google's dedicated artificial intelligence company DeepMind built AlphaGo, a machine that taught itself how to play Go. Because of the freedom of choice and the huge number of possible moves in a game of Go, AlphaGo can't rely on the kind of brute-force search through possible continuations that Chinook and Deep Blue used. Instead, it taught itself which moves are and aren't effective by studying hundreds of thousands of online Go games between humans, and then playing itself over and over again to deduce the most effective strategies.
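AlphaGo's actual machinery, deep neural networks guiding a Monte Carlo tree search, doesn't fit in a few lines, but the kernel of self-play learning does. The Python toy below reuses the hypothetical subtraction game from the Chinook slide: the program is given nothing but the rules, plays itself for 20,000 games, and nudges its estimate of every position it visits toward the eventual result.

```python
import random

# Toy self-play learning on the subtraction game (take 1-3 tokens;
# taking the last token wins). Nothing like AlphaGo's networks and
# tree search -- just the bare loop of play, score, improve.

N, LEARNING_RATE, EXPLORE = 20, 0.1, 0.2
value = {n: 0.0 for n in range(N + 1)}  # learned value for the player to move

def legal(n):
    return [m for m in (1, 2, 3) if m <= n]

def best_move(n):
    # Greedy play: move to the position that is worst for the opponent.
    return min(legal(n), key=lambda m: value[n - m])

for _ in range(20000):
    n, visited = N, []
    while n > 0:
        visited.append(n)
        # Mostly play greedily, but explore a random move now and then.
        m = random.choice(legal(n)) if random.random() < EXPLORE else best_move(n)
        n -= m
    # Whoever moved last took the final token and won. Walk backward,
    # alternating winner (+1) and loser (-1), nudging each estimate.
    outcome = 1.0
    for pos in reversed(visited):
        value[pos] += LEARNING_RATE * (outcome - value[pos])
        outcome = -outcome

# Multiples of 4 are genuinely lost for the player to move, and their
# learned values come out clearly negative; the rest come out positive.
print({n: round(v, 2) for n, v in value.items() if n})
```

AlphaGo layers deep networks on top of this loop so that positions it has never seen still get sensible value estimates, plus a tree search to sharpen its choices, but the cycle of self-improvement is the same.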
In March 2016, AlphaGo faced Lee Sedol, a professional Go player ranked second in the world. Not only did AlphaGo beat Lee, it took the match four games to one. The fourth game, in which Lee claimed his only victory, has been called one of the greatest games of Go ever played. Lee, who went into the match expecting to sweep the A.I., was surprised by his loss, but said that "robots will never understand the beauty of the game the same way that we humans do." And while AlphaGo may not be able to enjoy the game like HAL can with chess, its ability to learn independently and parse a huge number of variables could be applied to any number of different tasks, like recognizing faces and driving cars.
Photo via Popular Science
Libratus
Games like checkers, chess and Go were major hurdles for computer intelligence to overcome, but they share some qualities that make them easier for computers to process. Perhaps most importantly, they're perfect-information games, meaning that all players can see the full state of the game at any given time. This makes it easier for the computer to plot its future moves, because it can also predict what options are available to its opponent. A game like poker offers no such transparency: each player's cards are hidden. Poker is often seen as a game of human intuition and a study of character, but in January 2017, a computer named Libratus beat four professional poker players, taking home a final pot of $1.76 million. Over a 20-day tournament, Libratus played 120,000 hands of heads-up Texas hold 'em, facing one opponent at a time.
Created by a team at Carnegie Mellon University, Libratus is able to bluff and to randomize its bets enough that its opponents can't find any pattern to exploit. Going into the tournament, Libratus was a four-to-one underdog according to oddsmakers, but after its decisive win, co-creator Tuomas Sandholm said, "The best A.I.'s ability to do strategic reasoning with imperfect information has now surpassed that of the best humans." Unlike AlphaGo, Libratus didn't learn how to play poker by watching others. Instead, it was given only the rules and then taught itself by playing trillions of hands against itself to figure out optimal strategies. And since it taught itself how to play, it can teach itself how to improve: after playing each of the pros, Libratus would review how well it did and "algorithmically patch" the three biggest mistakes it made. Because these algorithms aren't specific to poker, Libratus could theoretically be turned loose on any number of tasks and teach itself how to do them, including negotiating sales and planning military and security strategy.
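Libratus' real methods are built around counterfactual regret minimization and are far more elaborate, but the heart of "randomizing so no pattern emerges" is an algorithm called regret matching, which fits in a short Python sketch. The toy below plays rock-paper-scissors rather than poker: two regret-matching learners face each other, and their average strategies drift toward the unexploitable mix, one third on each move.

```python
import random

# Regret matching in rock-paper-scissors: a toy of the idea behind a
# poker bot's pattern-free randomization, not Libratus' actual code.

ACTIONS = (0, 1, 2)  # rock, paper, scissors

def payoff(a, b):
    # +1 if action a beats action b, -1 if it loses, 0 on a tie.
    return (0, 1, -1)[(a - b) % 3]

def strategy(regret):
    # Play each action in proportion to its positive regret so far;
    # with no positive regret yet, play uniformly at random.
    pos = [max(r, 0.0) for r in regret]
    total = sum(pos)
    return [p / total for p in pos] if total else [1 / 3] * 3

T = 100_000
regrets = [[0.0] * 3, [0.0] * 3]
avg_strategy = [[0.0] * 3, [0.0] * 3]
for _ in range(T):
    probs = [strategy(regrets[0]), strategy(regrets[1])]
    plays = [random.choices(ACTIONS, weights=p)[0] for p in probs]
    for i in (0, 1):
        mine, theirs = plays[i], plays[1 - i]
        for a in ACTIONS:
            # Regret: how much better a would have scored than my actual play.
            regrets[i][a] += payoff(a, theirs) - payoff(mine, theirs)
        for a in ACTIONS:
            avg_strategy[i][a] += probs[i][a]

print([round(p / T, 2) for p in avg_strategy[0]])  # ~[0.33, 0.33, 0.33]
```

Scaling that principle up to no-limit hold 'em, with hidden cards, bet sizing and bluffs, is what Sandholm's team spent years on; the sketch only shows why the output looks like patternless randomness.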
Photo via Carnegie Mellon University