The news: An artificial intelligence called Agent57 has learned to play all 57 Atari video games in the Arcade Learning Environment, a collection of classic games that researchers use to test the limits of their deep-learning models. Developed by DeepMind, Agent57 uses a single deep reinforcement learning algorithm to achieve superhuman levels of play even in games that previous AIs have struggled with.
What’s in a game? Games are a great way to test AIs: they pose a variety of challenges that force an AI to come up with a range of strategies, while still offering a clear measure of success—a score—to train against. But four Atari games in particular have proved tough to beat. In Montezuma’s Revenge and Pitfall, an AI must try a lot of different strategies before hitting on a winning one. And in Solaris and Skiing there can be long waits between action and reward, making it hard for an AI to learn which moves earn the best payoff.
Meta-mind: To meet these challenges, Agent57 brings together multiple improvements that DeepMind has made to its Deep Q-network (DQN), the AI that first beat a handful of Atari games back in 2013. These include a form of memory that lets the agent base decisions on things it has previously seen in the game, and reward systems that encourage it to explore its options more fully before settling on a strategy. These techniques are then managed by a meta-controller, which balances the trade-off between pressing ahead with a particular strategy and doing more exploration.
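To make the meta-controller idea concrete, here is a minimal sketch that treats the choice between "stick with a known strategy" and "explore more" as a multi-armed bandit over a family of policies. The upper-confidence-bound rule, the toy policies, and the reward function are all illustrative assumptions for this sketch, not DeepMind's actual implementation.

```python
# Illustrative sketch only: a bandit-style meta-controller choosing among
# candidate policies that differ in how exploratory they are. All names and
# numbers here are assumptions made for the example.
import math
import random


class MetaController:
    """Picks among candidate policies using an upper-confidence-bound (UCB)
    rule over their running average episode returns."""

    def __init__(self, num_policies, exploration_bonus=1.0):
        self.counts = [0] * num_policies          # times each policy was chosen
        self.mean_returns = [0.0] * num_policies  # running average of returns
        self.exploration_bonus = exploration_bonus
        self.total = 0

    def select(self):
        # Try every policy at least once before trusting the statistics.
        for i, count in enumerate(self.counts):
            if count == 0:
                return i
        # Otherwise pick the policy with the best optimistic estimate:
        # mean return plus a bonus for policies tried less often.
        return max(
            range(len(self.counts)),
            key=lambda i: self.mean_returns[i]
            + self.exploration_bonus
            * math.sqrt(math.log(self.total) / self.counts[i]),
        )

    def update(self, policy_idx, episode_return):
        self.counts[policy_idx] += 1
        self.total += 1
        n = self.counts[policy_idx]
        # Incremental update of the running mean return for this policy.
        self.mean_returns[policy_idx] += (episode_return - self.mean_returns[policy_idx]) / n


if __name__ == "__main__":
    # Toy stand-in for a family of policies: higher index = more exploratory,
    # which we pretend pays off more on average but with higher variance.
    def run_episode(policy_idx):
        return random.gauss(policy_idx, policy_idx + 1)

    controller = MetaController(num_policies=4)
    for _ in range(500):
        idx = controller.select()
        controller.update(idx, run_episode(idx))
    print("Episodes per policy:", controller.counts)
```

Run over many episodes, the controller shifts effort toward whichever policy is paying off while still occasionally revisiting the others—the same exploit-versus-explore balance described above, in miniature.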
Why it matters: For all their success, the best deep-learning models we have today are not very versatile. Most tend to be good at one thing and one thing only. Training an AI to excel at more than one task is one of the biggest open challenges in deep learning. The ability to learn 57 different tasks makes Agent57 more versatile than previous game-playing AIs, but—and this often gets missed—it still can’t learn to play more than one game at a time. It needs to retrain for each new game, even though it can use the same algorithm to do so. In this way Agent57 is similar to AlphaZero, DeepMind’s deep reinforcement learning algorithm, which can learn to play chess, Go, and shogi—but again, not all at once. True versatility, which comes so easily to a human infant, is still far beyond AIs’ reach.