These incredibly realistic fake faces show how algorithms can now mess with us

A new approach to AI fakery can generate incredibly realistic faces, with whatever characteristics you’d like.

Dec 14, 2018
Fake faces made by Nvidia's researchers.

The faces above don’t seem particularly remarkable. They could easily be taken from, say, Facebook or LinkedIn. In reality, they were dreamed up by a new kind of AI algorithm.

Nvidia researchers have posted details of a method for producing completely imaginary fake faces with stunning, almost eerie, realism (here’s the paper).

The researchers, Tero Karras, Samuli Laine, and Timo Aila, came up with a new way of constructing a generative adversarial network, or GAN.

GANs employ two dueling neural networks to train a computer to learn the nature of a data set well enough to generate convincing fakes. When applied to images, this provides a way to generate often highly realistic fakery. The same Nvidia researchers have previously used the technique to create artificial celebrities (read our profile of the inventor of GANs, Ian Goodfellow).
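For readers curious how the two dueling networks fit together in code, here is a minimal, hypothetical sketch in PyTorch: a tiny generator and discriminator trained against each other on toy two-dimensional data. The network sizes, optimizer settings, and stand-in dataset are illustrative assumptions, not Nvidia's face model.

```python
# Minimal sketch of the two-network GAN setup described above (PyTorch).
# The tiny fully connected networks and the toy 2-D "data" are illustrative
# stand-ins, not Nvidia's actual face-generation model.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 2

generator = nn.Sequential(          # maps random noise to a fake sample
    nn.Linear(latent_dim, 64), nn.ReLU(),
    nn.Linear(64, data_dim),
)
discriminator = nn.Sequential(      # scores how "real" a sample looks
    nn.Linear(data_dim, 64), nn.ReLU(),
    nn.Linear(64, 1),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def real_batch(n=64):
    # Stand-in for a real dataset (e.g. photos of faces): points on a circle.
    angles = torch.rand(n, 1) * 6.2832
    return torch.cat([angles.cos(), angles.sin()], dim=1)

for step in range(2000):
    # 1) Train the discriminator to tell real samples from generated ones.
    real = real_batch()
    fake = generator(torch.randn(real.size(0), latent_dim)).detach()
    d_loss = (bce(discriminator(real), torch.ones(real.size(0), 1)) +
              bce(discriminator(fake), torch.zeros(fake.size(0), 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to fool the discriminator.
    fake = generator(torch.randn(64, latent_dim))
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

As each network improves, the other is forced to improve too, which is what eventually lets the generator produce convincing fakes.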

Nvidia makes the computer chips that are crucial to artificial intelligence, but the company also employs an army of software engineers to develop useful tools and to experiment with new ways of using its hardware.

Nvidia's fake celebrity faces (top two rows), and its new, more realistic fake faces below. (Image: Nvidia)

The images above show how much of an improvement the new work represents.

In the most recent work, the researchers took inspiration from a technique known as style transfer to build their GAN in a fundamentally different way. This allowed their algorithm to identify different elements of a face, which the researchers could then control.
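To make that style-transfer idea concrete, here is a hypothetical toy sketch of a style-based generator: a mapping network converts a random latent code into a style vector, each generator layer is modulated by that style, and mixing the styles of two latent codes at different layers controls coarse versus fine attributes. The layer sizes and the simple scale-and-shift modulation are assumptions for illustration, not the architecture Nvidia actually used.

```python
# Toy sketch of a "style-based" generator: styles are injected per layer,
# and swapping styles between two latents at different layers mixes
# coarse and fine attributes. Not the paper's actual architecture.
import torch
import torch.nn as nn

latent_dim, style_dim, feat_dim = 32, 32, 64

mapping = nn.Sequential(            # z -> w (the "style" space)
    nn.Linear(latent_dim, style_dim), nn.ReLU(),
    nn.Linear(style_dim, style_dim),
)

class StyledLayer(nn.Module):
    """One generator layer whose output is modulated by a style vector."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(feat_dim, feat_dim)
        self.to_scale = nn.Linear(style_dim, feat_dim)
        self.to_shift = nn.Linear(style_dim, feat_dim)

    def forward(self, x, w):
        h = torch.relu(self.fc(x))
        return self.to_scale(w) * h + self.to_shift(w)  # style sets scale/shift

layers = nn.ModuleList([StyledLayer() for _ in range(4)])
start = torch.zeros(1, feat_dim)    # stand-in for a learned constant input

def generate(styles_per_layer):
    x = start
    for layer, w in zip(layers, styles_per_layer):
        x = layer(x, w)
    return x

# "Style mixing": coarse layers use person A's style, fine layers person B's.
w_a = mapping(torch.randn(1, latent_dim))
w_b = mapping(torch.randn(1, latent_dim))
mixed = generate([w_a, w_a, w_b, w_b])
```

Because each layer gets its own style input, coarse attributes (such as pose or face shape) and fine ones (such as hair or skin texture) can be adjusted independently.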

A video produced by the researchers shows how the approach can also be used to play with, and remix, different elements, like age, race, gender, or even freckles.

“It surely seems like another big quality leap for GANs,” says Mario Klingemann, an artist and coder who uses GANs in his work. “It also appears to be amazingly controllable, unlike GANs so far where you have to experimentally figure out how to steer the results into a certain direction (like making a face smile or age it).”

Klingemann says he is keen to get his hands on the code, and to experiment with it for artistic purposes. “I am very interested to find out how to make that model do ‘wrong’ things,” he says.

GANs are likely to change the way video games and special effects are generated. The approach could conjure up realistic textures or characters on demand. Nvidia recently showed a project that uses GANs to synthesize the appearance of objects in a scene in real time within a driving game.

Adobe also has a project that uses GANs to improve the realism of images after they have been manipulated, removing artifacts that can easily be introduced. GANs can also be used to sharpen up degraded images or video. 

But the work is also a striking example of how advances in machine learning are leading to all sorts of new possibilities for fakery. We wrote about the potential for video fakery to harm political discourse in a special issue dedicated to politics earlier this year (see “Fake America great again”).