AI could generate faces that match the expressions of anonymous subjects, granting them privacy without taking away their ability to express themselves.

The news: A new technique uses generative adversarial networks (GANs), the technology behind deepfakes, to anonymize someone in a photo or video.

How it works: The algorithm extracts information about the person’s facial expression by locating the eyes, ears, shoulders, and nose. It then uses a GAN, trained on a database of 1.5 million face images, to generate an entirely new face with the same expression and blends it into the original photo, leaving the background untouched.
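To make the pipeline concrete, here is a minimal sketch of that three-step flow in Python. It is not the researchers’ implementation: `detect_keypoints` and `generate_face` are hypothetical stubs standing in for the real components (a pose estimator that locates the eyes, ears, shoulders, and nose, and a GAN trained on ~1.5 million face images), with only the blending step worked out.

```python
import numpy as np

def detect_keypoints(image: np.ndarray) -> np.ndarray:
    """Stub: return (x, y) positions for the seven keypoints the method
    conditions on (two eyes, two ears, two shoulders, one nose)."""
    h, w = image.shape[:2]
    return np.tile([w / 2.0, h / 2.0], (7, 1))  # placeholder positions

def generate_face(region: np.ndarray, keypoints: np.ndarray) -> np.ndarray:
    """Stub: a real GAN would synthesize a brand-new face whose pose and
    expression match the keypoints; here we return noise of the same shape."""
    return np.random.rand(*region.shape)

def anonymize(image: np.ndarray, face_box: tuple) -> np.ndarray:
    """Replace the face inside face_box with a generated one, blending it
    back with a soft mask so the surrounding background is retained."""
    x0, y0, x1, y1 = face_box
    keypoints = detect_keypoints(image)
    region = image[y0:y1, x0:x1]
    new_face = generate_face(region, keypoints)

    # Elliptical mask: 1 inside the face oval, 0 outside, so only the
    # face itself is swapped while the original background stays intact.
    yy, xx = np.mgrid[y0:y1, x0:x1]
    cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    mask = (((xx - cx) / ((x1 - x0) / 2.0)) ** 2
            + ((yy - cy) / ((y1 - y0) / 2.0)) ** 2) <= 1.0
    mask = mask[..., None].astype(float)

    out = image.copy()
    out[y0:y1, x0:x1] = mask * new_face + (1.0 - mask) * region
    return out

# Toy usage: anonymize a random "photo" with a face box in the center.
photo = np.random.rand(256, 256, 3)
result = anonymize(photo, face_box=(64, 64, 192, 192))
```

The key design point the sketch preserves: the generator never sees the original face pixels, only the keypoints and the surrounding context, which is what makes the anonymization hard to reverse.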

Glitch: Developed by researchers at the Norwegian University of Science and Technology, the technique is still highly experimental. It works on many types of photos and faces but trips up when the face is partially occluded or turned at certain angles, and its results on video remain glitchy.

Other work: This isn’t the first AI-based face anonymization technique. A paper in February from researchers at the University at Albany used deep learning to transplant key elements of a subject’s facial expressions onto someone else. That method required a consenting donor to offer his or her face as the new canvas for the expressions.

Why it matters: Face anonymization is used to protect the identity of someone, such as a whistleblower, in photos and footage. But traditional techniques, such as blurring and pixelation, run two risks: they can be incomplete, leaving enough information for the person’s identity to be recovered anyway, or they can strip away the person’s personality by erasing facial expressions along with everything else. Because GANs don’t use any part of the subject’s original face, they eliminate the first risk; and because they can re-create facial expressions in high resolution, they address the second.
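A toy pixelation routine, written here as a generic illustration rather than anything from the paper, makes that first failure mode concrete: averaging over blocks throws away detail, but the averages still encode identity cues that recognition models have been shown to exploit.

```python
import numpy as np

def pixelate(face: np.ndarray, block: int = 16) -> np.ndarray:
    """Average each block x block tile. The coarse averages still carry
    identity information, which is why pixelation can be incomplete
    as an anonymization technique."""
    h, w, c = face.shape
    h2, w2 = h - h % block, w - w % block  # crop to a multiple of block
    tiles = face[:h2, :w2].reshape(h2 // block, block, w2 // block, block, c)
    coarse = tiles.mean(axis=(1, 3), keepdims=True)
    return np.broadcast_to(coarse, tiles.shape).reshape(h2, w2, c)

pixelated = pixelate(np.random.rand(128, 128, 3))
```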

Not always the bad guy: The technique also demonstrates a new value proposition for GANs, which have developed a bad reputation for lowering the barrier to producing persuasive misinformation. While this study was limited to visual media, by extension it shows how GANs could also be applied to audio to anonymize voices.