We have long trusted the veracity of film. Body cams, video footage from a protester’s phone – we want to accept them as proof of what happened. But what if we can no longer trust what we see or hear?
So-called deepfakes – manipulated videos or other digital representations produced using sophisticated AI deep learning techniques – are increasingly becoming a mainstay of digital culture. The internet is littered with clips of actor Nicolas Cage in movies he was never in. An eerily real semblance of Tom Cruise has become a TikTok sensation.
Deepfakes can be entertaining, satirical, and even useful – like this video of soccer player David Beckham promoting the Malaria Must Die campaign. But much of the content also has a sinister side.
A University College London report published in August 2020 warned that fake audio and video ranked at the top of 20 ways in which AI can be used for crime. The technology can be used for personal attacks and to fuel disinformation; it also poses worrying national and international security risks.
Deepfakes are the “enemy at the gate,” said Arthur Holland Michel, an associate researcher in security and technology at the UN Institute for Disarmament Research (UNIDIR), speaking at a conference in Geneva on Wednesday.
The Innovations Dialogue, hosted by UNIDIR, sought to increase discussion between industry, academic, and governmental experts about the dangers to international security.
The technology is advancing rapidly, and while technologies to counter abusive uses of deepfakes are showing promise, they remain a step behind.
“The complexity of the threats requires very complex responses, and we are beyond the point where governments and industry can think of acting or solving the challenges alone,” Giacomo Persi Paoli, head of the security and technology unit at UNIDIR, and one of the dialogue’s moderators, told Geneva Solutions.
Fake images, real dangers
Although there have not been any major international or domestic security incidents so far, the use of deepfakes to wreak political, and even military, havoc is a real possibility.
The Flemish Socialist Party walked a fine line in 2018 when it posted a video of former US president Donald Trump that it had deepfaked. In the video, Trump calls for Belgium to leave the Paris climate agreement. At one point, fake Trump says: "We all know climate change is fake, just like this video." The statement that the video is fake did not appear in the Dutch subtitles, however.
The Flemish Socialist Party, which backs stronger climate action, intended the video as a way to “start a public debate” and highlight the need to act.
[The Flemish Socialist Party, which supports the fight against climate change, created a deepfake video of former US president Trump urging Belgium to leave the Paris Climate Agreement, 20 May 2018]
Deepfakes are easy to make. Researchers at the UN Global Pulse initiative showed in 2019 that UN speeches could be realistically faked in 13 minutes.
Accessible software like FakeApp or Doublicat makes it possible for anyone to create deepfakes like a pro. Recycle-GAN, developed by researchers at Carnegie Mellon University, is available on the open-source code repository GitHub.
[Recycle-GAN software developed by researchers at Carnegie Mellon University turned a video of comedian John Oliver into one depicting comedian Stephen Colbert, 9 August 2018]
Tens of thousands of “adversarial” deepfake videos exist around the globe, Giorgio Patrini, co-founder and CEO of Sensity, an AI fraud detection company, said at the dialogue. The number of problematic deepfakes doubles roughly every six months.
The very uncertainty about whether a video is real or fake can feed instability. A recent example is the rumor that Trump’s concession speech in January 2021 was faked.
It may become so difficult to recognise what is real or fake that deepfakes may eventually “contaminate” official intelligence gathering and decision making, Persi Paoli warned.
“It is a game of cat and mouse”, Marie-Valentine Florin, executive director of the International Risk Governance Center at the Swiss Federal Institute of Technology Lausanne (EPFL), told Geneva Solutions. “Deepfake creators are always ahead of the deepfake detectors.”
There are ways, however, to expose the imposters. Media can maintain public trust in their video content by encoding signatures into files at each stage of their creation to verify where they come from. An example is Project Origin, developed by the BBC, CBC Radio Canada, Microsoft, and The New York Times.
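The idea behind stage-by-stage signing can be illustrated with a short sketch. This is not Project Origin’s actual design; it is a simplified illustration of the principle, using a symmetric HMAC key (real provenance systems use asymmetric, certificate-backed signatures) and hypothetical stage names like "capture" and "edit".

```python
import hashlib
import hmac

# Hypothetical newsroom signing key; production systems would use
# asymmetric keys tied to a verifiable certificate, not a shared secret.
SECRET_KEY = b"newsroom-signing-key"

def sign_stage(media_bytes: bytes, stage: str, prev_signature: bytes = b"") -> bytes:
    """Sign one stage of a media file's life cycle, chaining in the
    previous stage's signature so the full history is committed to."""
    digest = hashlib.sha256(media_bytes).digest()
    return hmac.new(SECRET_KEY, prev_signature + stage.encode() + digest,
                    hashlib.sha256).digest()

def verify_chain(media_versions, signatures) -> bool:
    """Recompute each stage's signature and compare with the stored chain.
    Any tampering with an earlier file invalidates every later signature."""
    prev = b""
    for (media, stage), sig in zip(media_versions, signatures):
        expected = sign_stage(media, stage, prev)
        if not hmac.compare_digest(expected, sig):
            return False
        prev = sig
    return True
```

A viewer’s player could run a check like `verify_chain` against published signatures: if the footage was altered after the capture stage, the chain no longer verifies.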
[The Washington Post released an analysis of manipulated videos that make US speaker of the house Nancy Pelosi appear drunk and compared them to the originals, 24 May 2019]
Distributed ledger technologies like blockchain that publicly verify digital transactions will likely play a role in verifying digital content in future, said Laura Ellis, head of technology forecasting at the BBC, who attended the dialogue.
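The verification property Ellis describes comes from chaining records together: each ledger entry commits to the one before it, so altering an old record retroactively breaks every later hash. A minimal toy sketch, assuming a single publisher and in-memory storage (a real distributed ledger adds consensus and replication on top of this):

```python
import hashlib
from dataclasses import dataclass

GENESIS = "0" * 64  # placeholder "previous hash" for the first entry

@dataclass
class Entry:
    content_hash: str  # SHA-256 of the published media file
    prev_hash: str     # hash of the previous ledger entry
    entry_hash: str    # hash committing to both fields above

class Ledger:
    """Toy append-only ledger: each entry's hash covers the previous
    entry's hash, so the chain is tamper-evident."""

    def __init__(self):
        self.entries = []

    def publish(self, media_bytes: bytes) -> Entry:
        content_hash = hashlib.sha256(media_bytes).hexdigest()
        prev_hash = self.entries[-1].entry_hash if self.entries else GENESIS
        entry_hash = hashlib.sha256((prev_hash + content_hash).encode()).hexdigest()
        entry = Entry(content_hash, prev_hash, entry_hash)
        self.entries.append(entry)
        return entry

    def verify(self, index: int, media_bytes: bytes) -> bool:
        """Check that the file matches its record and the chain is intact."""
        entry = self.entries[index]
        if hashlib.sha256(media_bytes).hexdigest() != entry.content_hash:
            return False
        prev_hash = self.entries[index - 1].entry_hash if index else GENESIS
        recomputed = hashlib.sha256((prev_hash + entry.content_hash).encode()).hexdigest()
        return recomputed == entry.entry_hash
```

Because the hashes are public, anyone can recompute the chain and confirm that a given video matches what the publisher originally committed to.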
Forensic analysis software, which scans a video for behavioural and technical irregularities, is also becoming more sophisticated.
DIY checks are possible under some circumstances. Spontaneous movement is difficult for a deepfake to render convincingly in real time, so one tactic in a livestream is to ask the speaker to turn their face, Sensity’s Patrini explained.
Social media platforms can be a first line of defense. Yoel Roth, head of site integrity at Twitter, explained that by looking at how deepfakes spread on their platform they have been able to limit exposure.
Industry and government have taken a “siloed” approach to handling the security implications of deepfakes, which UNIDIR’s Persi Paoli hopes will be overcome through more frequent meetings between experts and policymakers, like Wednesday’s Innovations Dialogue.
While the EU is working on a general AI strategy, deepfakes have not been incorporated into multilateral processes in any significant manner, he said.
Regulating an “all-purpose technology” – one that is extensively used for legitimate purposes, including entertainment and education – poses a difficulty, he explained, because you cannot ban the technology completely.
It is about regulating applications and actors, EPFL’s Florin said. This includes regulating media, social platforms, and deepfake creators.
Regulations focused on deepfakes are being developed on a case-by-case basis, she explained. One example she gave is a law California passed last year banning deepfakes during election season. She said she was not certain whether such a law could remain on the books in the US in light of the First Amendment, which protects free speech.
Copyright, libel, and data privacy laws are applicable to certain deepfake security incidents, but not all, she said.
Role for the UN
Disparities between countries’ legal systems make international regulation of deepfakes near impossible, Florin said. But there is room to foster greater digital trust at the international level.
“We need to work on hierarchies of trust”, said Amandeep Singh Gill, director of the International Digital Health and AI Research Collaborative (I-DAIR) at the Graduate Institute of International and Development Studies in Geneva. He called for clarification of the legal landscape to guide users, consumers, and victims of deepfakes.
The UN can help by connecting more diverse stakeholders in its processes, said Izumi Nakamitsu, UN under-secretary general and high representative for disarmament affairs.
The latest series of UN working groups and governmental expert sessions on international security focused on digital challenges. The talks spanned June 2019 to May 2021 and included the first multi-stakeholder event on the topic in December 2019. The new working group in December of this year offers an opportunity to discuss deepfakes.
Nakamitsu also envisions a proactive role for the UN in tackling deepfakes. The UN should not just convene and facilitate discussions, she said. It should also contribute ideas for solutions.
“The international community now urgently needs to prioritise issues of digital trust and security, and we need to take concrete steps to protect and promote shared standards of truth if we want to harness the transformative benefits of the digital revolution,” Robin Geiss, UNIDIR’s director, said.