
Deep Fakes, Elections, and Safeguarding Democracy

Understanding and Preventing the Threat of Deep Fakes in Elections

By Dr. Tommy Cooke

Oct 11, 2024

Key Points: 


  • Deep Fakes use AI to create hyper-realistic fake media, posing serious risks to elections  

  • They can manipulate voter perceptions and erode trust in democratic institutions  

  • Organizations and governments must invest in detection tools, media literacy, and rapid response protocols to combat misinformation 

As the 2024 US election approaches, the integrity of the democratic voting process is under threat from Deep Fakes. In January 2024, a robocall impersonating Joe Biden spread misinformation about the election process, demonstrating how easily AI can be used to manipulate voters. The incident highlights the urgency with which governments, organizations, and citizens need to understand Deep Fakes – and to take proactive measures to combat them. 

What Are Deep Fakes? 

Deep Fakes are AI-generated content, usually in the form of images, videos, or audio recordings, designed to closely replicate the appearance, voice, and mannerisms of real people. Deep Fakes are often generated with a class of machine learning models called Generative Adversarial Networks (GANs), in which two neural networks compete with one another. One network (the generator) creates a fake image, recording, or video; the other (the discriminator) tries to spot whether it is fake. The two networks keep competing, improving over time until the generator produces fake content that the discriminator struggles to distinguish from real footage. The process often results in uncanny, hyper-realistic resemblances that mislead viewers into believing they have witnessed a statement or action that never occurred. The ongoing proliferation of Deep Fakes raises considerable ethical and security concerns, particularly during elections. A minimal sketch of the adversarial training loop appears below. 
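
To make the adversarial setup concrete, here is a minimal, hypothetical sketch of a GAN training loop in PyTorch. It learns a toy one-dimensional data distribution rather than images or audio; the model sizes, learning rates, and target distribution are illustrative assumptions only, not any particular Deep Fake system.

```python
# Minimal GAN sketch on a toy 1-D distribution (illustrative assumptions throughout).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: maps random noise to a fake "sample".
generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
# Discriminator: scores how likely a sample is real (1) vs. fake (0).
discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    # "Real" data: samples from a normal distribution centered at 4.0 (toy stand-in).
    real = torch.randn(64, 1) + 4.0
    noise = torch.randn(64, 8)
    fake = generator(noise)

    # 1) Train the discriminator to tell real from fake.
    opt_d.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to fool the discriminator.
    opt_g.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

# The generator's samples should drift toward the "real" mean of ~4.0.
print(generator(torch.randn(256, 8)).mean().item())
```

The same two-player dynamic, scaled up to images or audio, is what makes the resulting fakes so convincing: the generator is trained directly against a model whose only job is to catch it.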

Why Deep Fakes Threaten Democracy 

Deep Fakes are not merely technological curiosities. They can be a powerful weapon for misinformation, designed to: 

  1. Manipulate Public Perception: Deep Fakes can falsely portray political figures making inflammatory statements or engaging in unethical behavior, sowing confusion and eroding voter trust. The January 2024 robocall that mimicked Joe Biden is a clear example. 

  2. Erode Trust in Democratic Institutions: As more Deep Fakes are produced, they generate ever higher levels of suspicion and confusion about what is real and what is not. The atmosphere of doubt and uncertainty that Deep Fakes create thus undermines the credibility of both political candidates and the electoral process. In an age where misinformation already spreads rapidly on social media platforms, Deep Fakes are particularly potent in their ability to sow distrust among an otherwise informed and engaged electorate. 

  3. Distort Voter Behavior: When Deep Fakes are curated for specific audiences (e.g., traditionally Democratic voters), adversaries of the democratic process can manipulate those voters into questioning the validity of the voting process or even the value of their own vote. This targeted approach can significantly shift voter behavior in ways that are detrimental to both political parties.  

The 2024 Election: A New Era for Deep Fakes 

As we head into the 2024 United States presidential election in November, the threat of Deep Fakes is unprecedentedly high. In today’s polarized political climate, Deep Fakes have the potential to escalate tensions by spreading false narratives that align with partisan biases, making existing divisions deeper and wider. It is imperative that governments and organizations adhere to best practices to combat Deep Fakes:  

  1. Monitoring and Detection Governments and organizations can invest in advanced AI tools capable of detecting Deep Fakes in real time. Detection algorithms learn to recognize the artifacts that the generation process leaves behind, flagging suspicious content before it gains widespread traction (a minimal sketch of such a classifier follows this list).  

  2. Media Literacy One of the most effective ways to mitigate the impact of Deep Fakes is through public education. Media literacy campaigns equip voters with the skills they need to critically evaluate the content they encounter. 

  3. Rapid Response Governments and election bodies must have clear strategies in place for when Deep Fakes emerge. Rapid response protocols can include issuing statements to correct misinformation, collaborating with tech companies to remove malicious content, and engaging the public through verified communication channels.  

  4. Cross-Sector Collaboration Governments, tech companies, and media organizations should work together to create transparent systems that verify the authenticity of election-related media. Public-private partnerships can help develop common standards for content verification and share expertise in detecting Deep Fakes. 
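
To illustrate the general shape of the detection tooling mentioned in point 1, here is a minimal, hypothetical sketch of a binary "real vs. fake" frame classifier in PyTorch. The architecture, input size, and placeholder training batch are illustrative assumptions; production detectors are far more sophisticated and typically combine many signals beyond a single frame.

```python
# Minimal "real vs. fake" frame classifier sketch (illustrative assumptions throughout).
import torch
import torch.nn as nn

# A small CNN mapping a 3x64x64 frame to a single "probability of fake" score.
detector = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
    nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 1),
    nn.Sigmoid(),
)

optimizer = torch.optim.Adam(detector.parameters(), lr=1e-4)
loss_fn = nn.BCELoss()

def train_step(frames: torch.Tensor, labels: torch.Tensor) -> float:
    """One training step on a batch of frames labeled 1 (fake) or 0 (real)."""
    optimizer.zero_grad()
    loss = loss_fn(detector(frames), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Placeholder batch; a real pipeline would load labeled video frames instead.
frames = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, 2, (8, 1)).float()
print(train_step(frames, labels))
```

The hard part in practice is not the classifier itself but keeping it current: as generation models improve, the artifacts they leave behind change, which is why detection must be paired with media literacy, rapid response, and cross-sector collaboration rather than relied on alone.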

Proactive Measures for Future Elections 

As Deep Fakes become more commonplace on social media platforms, organizations, governments, and citizens need to recognize the urgency of this issue – and act proactively:  

  • Partner with Independent Fact-Checking Organizations: Governments should work closely with fact-checking bodies to ensure swift and accurate debunking of manipulated content.  

  • Promote Research and Development: Supporting innovation in the Deep Fake detection and prevention space is crucial. Governments, organizations, and citizens can stay ahead of emerging threats by investing in tools that protect the democratic process. 

The risks to democratic integrity are real. Without concerted efforts, the impact of Deep Fakes can reshape the political landscape in exceptionally harmful ways. By investing in monitoring systems, educating the public, and establishing rapid response protocols, we can mitigate these risks and protect the foundations of our democratic processes. 
