Looking At Deepfakes and Election Security In 2024

Doron Ish Shalom, Head of BizDev & Strategic Partnerships
October 1, 2024

Deepfake Biden Robocalls: Election Interference

In early 2024, New Hampshire voters were targeted by a deepfake robocall mimicking U.S. President Joe Biden’s voice. The call falsely urged Democrats to abstain from voting in the state’s presidential primary, claiming their participation would inadvertently aid Republican efforts to re-elect Donald Trump.

The audio was convincing enough that many listeners could plausibly have believed it was authentic. Investigations traced the deepfake to tools provided by ElevenLabs, an AI voice-cloning company; the firm quickly suspended the responsible user’s account and denied any intent to facilitate misuse.

The incident underscores how deepfakes can be used to suppress voter turnout and manipulate public opinion. By mimicking trusted figures, such as a sitting president, these technologies exploit the public's reliance on familiar voices and faces to discern truth from falsehood.

Deepfakes and Political Disinformation

Deepfakes represent a significant escalation in the disinformation landscape. Unlike traditional fake news or doctored images, deepfakes leverage deep neural networks, most notably generative adversarial networks (GANs), to create highly realistic but entirely fabricated content; a minimal sketch of the adversarial training idea follows the list below. This has implications for politics:

  • Erosion of trust: Deepfakes blur the line between reality and fiction, making it harder for citizens to trust what they see or hear. Studies show that people struggle to identify deepfakes accurately, with detection rates no better than random guessing.
  • Targeting public figures: Deepfakes have been used to discredit political leaders globally. For instance, Ukrainian President Volodymyr Zelenskyy was falsely depicted in a video ordering his troops to surrender during the ongoing conflict with Russia.
  • Amplifying social divisions: Manipulated content can exploit ethnic or political tensions, as seen in India and Moldova, where deepfakes have been used to undermine female politicians or ridicule pro-Western leaders.
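
To make the GAN idea concrete, here is a minimal, self-contained training sketch on toy one-dimensional data. It illustrates only the generator-versus-discriminator loop, not any real deepfake model; production systems use far larger networks trained on faces or voices.

```python
# Minimal GAN sketch: a generator learns to mimic a "real" data
# distribution by fooling a discriminator. Toy 1-D data for illustration.
import torch
import torch.nn as nn

def real_data(n):
    return torch.randn(n, 1) * 0.5 + 2.0  # "real" samples: N(2.0, 0.5)

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # discriminator (logits)

loss = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)

for step in range(2000):
    real = real_data(64)
    fake = G(torch.randn(64, 8))

    # Discriminator step: label real samples 1 and generated samples 0.
    d_loss = loss(D(real), torch.ones(64, 1)) + loss(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: try to make the discriminator label fakes as real.
    g_loss = loss(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# If training worked, generated samples should cluster near 2.0.
print("generated mean:", G(torch.randn(1000, 8)).mean().item())
```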

Challenges in Combating Deepfake Disinformation

Despite their potential for harm, most deepfakes today are either detectable with current forensic tools or quickly debunked on social media platforms. However, the rapid advancement of AI technology poses significant challenges:

  • Detection difficulties: As GANs improve, distinguishing real from fake content becomes increasingly difficult. Current detection methods, such as analyzing unnatural facial movements or checking provenance metadata and watermarks (a sketch of one such metadata check follows this list), may soon become obsolete.
  • Regulatory gaps: Governments worldwide lack comprehensive frameworks to regulate AI-generated content effectively. While some tech companies have pledged voluntary measures—such as labeling AI-generated media—enforcement remains inconsistent.
  • Public awareness: Many voters remain unaware of how easily AI can fabricate convincing disinformation. This lack of awareness makes individuals more susceptible to manipulation during critical moments like elections.
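
One concrete, deliberately simple example of a provenance check: some generation tools leave traces in image metadata. The sketch below, using Pillow, scans EXIF fields for a hypothetical list of generator names; it also illustrates why such checks are fragile, since most deepfakes simply strip or never write this metadata. The file name and the hint list are placeholders.

```python
# Naive metadata provenance check (illustrative heuristic only):
# scan EXIF fields for strings associated with known AI generators.
# Absence of markers proves nothing; most fakes carry no such metadata.
from PIL import Image
from PIL.ExifTags import TAGS

GENERATOR_HINTS = {"stable diffusion", "dall-e", "midjourney"}  # hypothetical list

def metadata_hints(path: str) -> list[str]:
    """Return EXIF entries that mention a known AI generator."""
    exif = Image.open(path).getexif()
    hits = []
    for tag_id, value in exif.items():
        name = TAGS.get(tag_id, str(tag_id))
        if isinstance(value, str) and any(h in value.lower() for h in GENERATOR_HINTS):
            hits.append(f"{name}: {value}")
    return hits

if __name__ == "__main__":
    print(metadata_hints("suspect.jpg") or "no generator markers found")
```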

A Call for Action

The rise of deepfakes requires action from policymakers, tech companies, and civil society. Governments must establish clear legal frameworks requiring transparency in AI-generated content.

Investment in advanced forensic technologies is also important to keep pace with evolving threats. Raising awareness about the risks of deepfakes can empower citizens to critically evaluate digital content.

Tech companies must work together to develop standards for identifying and mitigating harmful uses of AI. Detection and response tools play a key role in flagging manipulated political content before it is widely disseminated.
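
One common building block of such tools is feature-based audio screening. The sketch below is a hedged illustration, not any vendor’s actual pipeline: it summarizes clips as MFCC spectral features and trains a simple classifier to separate authentic from synthetic speech. The random arrays and the real/synthetic labels are placeholders for a genuine labeled dataset.

```python
# Hedged sketch of feature-based audio deepfake screening: summarize each
# clip as spectral (MFCC) features, then train a simple classifier on
# labeled real vs. synthetic examples. Data here is placeholder noise.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def clip_features(y: np.ndarray, sr: int) -> np.ndarray:
    """Summarize a clip as its mean MFCC vector (a crude spectral fingerprint)."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

sr = 16_000
# Placeholder corpus: random 1-second signals standing in for labeled clips.
X = np.stack([clip_features(np.random.randn(sr), sr) for _ in range(40)])
y = np.array([0] * 20 + [1] * 20)  # 0 = authentic, 1 = synthetic (placeholder labels)

clf = LogisticRegression(max_iter=1000).fit(X, y)
print("flag first clip as synthetic?", bool(clf.predict(X[:1])[0]))
```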
