
Virtual robots that teach themselves kung fu could revolutionize video games

Machine learning may make it much easier to build complex virtual characters.

Apr 10, 2018
This character learned how to perform various acrobatic feats by observing a human.
Berkeley Artificial Intelligence Research

In the not-so-distant future, characters might practice kung-fu kicks in a digital dojo before bringing their moves into the latest video game.

AI researchers at UC Berkeley and the University of British Columbia have created virtual characters capable of imitating the way a person performs martial arts, parkour, and acrobatics, practicing moves relentlessly until they get them just right.

The work could transform the way video games and movies are made. Instead of planning a character’s actions in excruciating detail, animators might feed real footage into a program and have their characters master the recorded moves through practice. Such a character could then be dropped into a scene and left to perform the actions.

The same algorithm can be used to teach a wide range of challenging physical skills.
Berkeley Artificial Intelligence Research

“An artist can give just a few examples, and then the system can generalize to all different situations,” says Jason Peng, a first-year PhD student at UC Berkeley, who carried out the research.

The virtual characters developed by the researchers use an AI technique known as reinforcement learning, which is loosely modeled on the way animals learn (see “10 Breakthrough Technologies 2017: Reinforcement Learning”).

The researchers captured the motions of expert martial artists and acrobats. A virtual character experiments with its own movements and receives positive reinforcement each time it gets a little closer to the recorded reference. The approach requires the character to have a physically realistic body and to inhabit a world with accurate physical rules.
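
To give a rough sense of how such a reward might work, here is a minimal, illustrative sketch in Python. It is not the researchers’ actual implementation: the joint representation, weights, and scaling factors are assumptions chosen for clarity. The idea is simply that the character earns more reward the closer its simulated pose and joint velocities are to the motion-capture reference at each time step.

import numpy as np

def imitation_reward(sim_pose, ref_pose, sim_vel, ref_vel,
                     pose_weight=0.7, vel_weight=0.3):
    # Illustrative pose-tracking reward (not the authors' exact formulation).
    # sim_* are the simulated character's joint angles/velocities at this step;
    # ref_* are the corresponding values from the motion-capture reference.
    pose_err = np.sum((np.asarray(sim_pose) - np.asarray(ref_pose)) ** 2)
    vel_err = np.sum((np.asarray(sim_vel) - np.asarray(ref_vel)) ** 2)
    # Exponentiated errors keep each term in (0, 1]; the scales are arbitrary here.
    return pose_weight * np.exp(-2.0 * pose_err) + vel_weight * np.exp(-0.1 * vel_err)

# Hypothetical usage: at every simulation step the character is scored on how
# closely it matches the expert's recorded motion at that moment.
r = imitation_reward(sim_pose=[0.1, -0.4, 0.8], ref_pose=[0.0, -0.5, 0.9],
                     sim_vel=[1.2, 0.0, -0.3], ref_vel=[1.0, 0.1, -0.2])

A reinforcement learning algorithm then adjusts the character’s control policy to maximize this reward over time, which is what drives the relentless practice.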

Because the character is rewarded simply for matching whatever reference motion it is given, the same algorithm can train it to do a backflip or a moonwalk. “You can actually solve a large range of problems in animation,” says Sergey Levine, an assistant professor at UC Berkeley who’s involved with the project.

The computer-generated characters in high-budget video games and movies might look realistic, but they are little more than digital marionettes, following a painstakingly choreographed script.

The animation and computer-games industries are already exploring software that automatically adds realistic physics to characters. James Jacobs, CEO of Ziva Dynamics, an animation company that specializes in building characters with realistic physical characteristics, says reinforcement learning offers a way to bring realism to behavior as well as appearance. “Up until this point people have been leaning on much simpler approaches,” Jacobs says. “In this case you are training a computational model to understand the way a human or a creature moves, and then you can just direct it, start applying external forces, and it will adapt to its environment.”

The reinforcement learning process involves making gradual progress—and the odd fall.
Berkeley Artificial Intelligence Research

The approach could have benefits that go beyond video games and special effects. Real robots may eventually learn to perform complex tasks with simulated practice. A bot might practice putting a table together in simulation, for instance, before trying it in the real world.

Levine says the robots could end up teaching us some new tricks. “If somebody wants to do some sort of gymnastics thing that nobody has ever tried before, in principle they could plug it into this and there’s a good chance something very reasonable would come out,” he says.