Tech Policy

The news: Twitter has drafted a deepfake policy that would warn users about synthetic or manipulated media, but not remove it. Specifically, it says it would place a notice next to tweets that contain deepfakes, warn people before they share or like tweets that include deepfakes, or add a link to a news story or Twitter Moment explaining that the media isn’t real. Twitter has said it may remove deepfakes that could threaten someone’s physical safety or lead to serious harm. People have until November 27 to give Twitter feedback on the proposals.

The context: It’s become relatively easy to make convincing doctored videos thanks to advances in artificial intelligence. That’s led to a huge panic over the potential for deepfakes to subvert democracy, as they can be used to make politicians seem to say or do whatever the creator wants.

A real threat? The most notorious political deepfakes so far either have not been deepfakes at all (see the Nancy Pelosi video released in May) or have been created by people warning about deepfakes, rather than by bad actors themselves. For example, two new deepfakes released in the UK today show the prime minister, Boris Johnson, and the leader of the opposition, Jeremy Corbyn, endorsing each other ahead of the election on December 12. But they were created by a social enterprise trying to raise awareness of the issue.

The real problem: There is no denying that deepfakes pose a significant new threat. But so far, they’re mostly a threat to women, particularly famous actors and musicians. A recent report found that 96% of deepfakes are porn, virtually always created without the consent of the person depicted. Such videos already break Twitter’s existing rules and would be removed.

An issue for the whole industry: That said, it is refreshing to see a social-media company wrangling with its content-moderation responsibilities so openly. The varying responses to the Pelosi video (YouTube removed it, Facebook flagged it as false, and Twitter let it stand) show what a complex, thorny problem manipulated videos can pose. And unfortunately, we can’t expect deepfake detection technology to fix it, either. We’ll need social and legal solutions, too.
