Hao Li, 32
Smarter animation bridges the gap between the physical and digital worlds.
Hao Li remembers watching Jurassic Park as a kid: “That moment of seeing something that didn’t exist in reality, but it looked so real—that was definitely the one that made me think about doing this,” he says. Li tells me the story one afternoon while we dine at the cafeteria of Industrial Light & Magic, the famed San Francisco visual-effects studio where he has been working on a way to digitally capture actors’ facial expressions for the upcoming Star Wars movies. When Jurassic Park came out, Li was 12 years old and living in what he calls the “boonie” town of Saarbrücken, Germany, where his Taiwanese parents had moved while his father completed a PhD in chemistry. Now, 20 years later, if all goes to plan, Li’s innovation will radically alter how effects-laden movies are made, blurring the line between human and digital actors.
Visual-effects artists typically capture human performances using markers: small balls or tags placed on an actor’s face and body to track movement. The motion of those markers is recorded and converted into a digital file that can be manipulated. But markers are distracting and uncomfortable for actors, and they’re not very good at capturing subtle changes in facial expression. Li’s breakthrough involves depth sensors, the same technology used in motion-gaming systems like the Xbox Kinect. When a depth camera is aimed at an actor’s face, Li’s software analyzes the data to figure out how the facial shapes morph from one frame to the next. As the actor’s lips curl into a smile, the algorithm keeps track of the expanding and contracting lines and shadows, essentially “identifying” the actor’s lips. The software then maps the actor’s face onto a digital version. Li’s work improves the authenticity of digital performances while speeding up production.
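To make the idea concrete, here is a minimal sketch of the general technique behind markerless facial capture: fitting per-expression “blendshape” weights to 3D points recovered from a depth camera. This is an illustrative toy, not Li’s actual software; the function name, the assumption that depth points already correspond one-to-one with the digital face’s vertices, and the single “smile” shape are all invented for the example.

```python
# Illustrative only: a toy linear blendshape fit, not Li's actual software.
# Assumes each depth frame has already been converted to 3D points that
# correspond one-to-one with the vertices of a neutral digital face mesh.
import numpy as np

def fit_blendshape_weights(frame_points, neutral, blendshape_deltas):
    """Estimate how strongly each expression (e.g. 'smile') is active in one frame.

    frame_points:      (V, 3) observed 3D points from the depth camera
    neutral:           (V, 3) vertices of the neutral digital face
    blendshape_deltas: (K, V, 3) per-expression vertex offsets from neutral
    The digital face is then posed as neutral + sum_k w_k * delta_k.
    """
    V = neutral.shape[0]
    K = blendshape_deltas.shape[0]
    # Stack each expression's vertex offsets into one column of a (3V, K) matrix.
    A = blendshape_deltas.reshape(K, 3 * V).T
    # The observed displacement of every vertex away from the neutral face.
    b = (frame_points - neutral).reshape(3 * V)
    # Least-squares fit of the weights, clamped to the usual [0, 1] range.
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.clip(w, 0.0, 1.0)

# Tiny synthetic example: a 4-vertex "face" with a single 'smile' blendshape.
neutral = np.zeros((4, 3))
smile = np.array([[[0.0, 1.0, 0.0]] * 4])           # (1, 4, 3) mouth-corner lift
observed = neutral + 0.6 * smile[0]                  # a frame where the smile is 60% active
print(fit_blendshape_weights(observed, neutral, smile))  # prints approximately [0.6]
```

Running this per frame yields a stream of expression weights that can drive a digital character in real time, which is the spirit of what the prose above describes.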
Li is amiably brash, unembarrassed about proclaiming his achievements, his ambitions, and the possibilities of his software. His algorithm is already in use in some medical radiation scanners, where it keeps track of the precise location of a tumor as a patient breathes. In another project, the software has been used to create a digital model of a beating heart. Ask him whether his technology can be used to read human emotions, or to achieve some other far-off possibility, and he’s likely to say, “I’m working on that, too.”
When I ask if he speaks German, Li smiles and says he does—“French, German, Chinese, and English.” This fall, he will begin working in Los Angeles as an assistant professor in a computer graphics lab at the University of Southern California. But Hollywood movies are not the end game. “Visual effects are a nice sandbox for proof of concepts, but it’s not the ultimate goal,” Li says. Rather, he sees his efforts in data capture and real-time simulation as just a step on the way to teaching computers to better recognize what’s going on around them.
—Farhad Manjoo