Jeremy Portje


Deepfakes are solvable—but don’t forget that “shallowfakes” are already pervasive

Malicious synthetic videos are not yet mainstream, which gives tech companies a chance to prevent future misinformation. But it won’t fix our current flood of fake news.

Mar 25, 2019

The technology industry has a unique opportunity to tackle “deepfakes”—the problem of fake audio and video created using artificial intelligence—before they become a widespread problem, according to human rights campaigner Sam Gregory.

But, he warns, major companies are still a very long way from tackling the pervasive and more damaging issue of cruder “shallowfake” misinformation.

Gregory is a program manager at Witness, which focuses on the use of video in human rights—either by activists and victims to expose abuses, or by authoritarian regimes to suppress dissent. Speaking on Monday to an audience at EmTech Digital, an event organized by MIT Technology Review, he said that the deepfakes we’re currently seeing are “the calm before the storm.”

“Malicious synthetic media are not yet widespread in usage, tools have not yet gone mobile—they haven’t been productized,” said Gregory. This moment, he suggested, presents an unusual opportunity for those building synthetic-media tools to establish ways of combating their misuse before bad actors can deploy the technology widely.

“We can be proactive and pragmatic in addressing this threat to the public sphere and our information ecosystem,” he said. “We can prepare, not panic.”

While deepfakes may still be some way from the mainstream, there is already a problematic flood of misinformation that has not been solved. Fake information today generally does not rely on AI or complex technology. Rather, simple tricks like mislabeling content to discredit activists or spread false information can be devastatingly effective, sometimes even resulting in deadly violence, as happened in Myanmar.

“By these ‘shallowfakes’ I mean the tens of thousands of videos circulated with malicious intent worldwide right now—crafted not with sophisticated AI, but often simply relabeled and re-uploaded, claiming an event in one place has just happened in another,” Gregory said.

For example, he said, one video of a person being burned alive has been reused and relabeled as having taken place in Ivory Coast, South Sudan, Kenya, and Burma, “each time inciting violence.”

Another threat, Gregory said, is the growing idea that we cannot trust anything we see, which is “in most cases simply untrue.” The spread of this idea “is a boon to authoritarians and totalitarians worldwide.”

“An alarmist narrative only enhances the real dangers we face: plausible deniability and the collapse of trust,” he added.

Mark Latonero, human rights lead at Data &amp; Society, a nonprofit institute that researches the social implications of data-centric technologies, agreed that technology companies should be doing more to tackle such issues. While Microsoft, Google, Twitter, and others have employees focused on human rights, he said, there is far more they should be doing before they deploy technologies, not after.

“Now is really the time for companies, researchers, and others to build these very strong connections to civil society, and the different country offices where your products might launch,” he said. “Engage with the people who are closest to the issues in these countries. Build those alliances now. When something does go wrong—and it will—we can start to have the foundation for collaboration and knowledge exchange.”