As the 2024 U.S. presidential election nears, the Federal Election Commission (FEC) has taken steps to address the growing threat of AI-generated deepfakes in political advertising.
With AI tools becoming more advanced and accessible, concerns about their potential to mislead voters and undermine democracy have prompted calls for regulation.
The Rising Threat of AI Deepfakes in Politics
Deepfakes—AI-generated media that fabricates realistic but false content—are increasingly being used in politics. In 2023, the Republican National Committee released an AI-generated video depicting a dystopian future under President Joe Biden, while a super PAC supporting Ron DeSantis used an AI-simulated voice of Donald Trump in an attack ad.
These examples highlight how campaigns are leveraging AI to shape narratives, often blurring the line between reality and fabrication.
Experts warn that deepfakes could mislead voters, erode trust in political communication, and disrupt elections. As campaigns adopt generative AI tools for efficiency and creativity, the risk of deceptive practices grows, making regulation critical.
FEC’s Initial Steps Toward Regulation
In August 2023, the FEC unanimously voted to open a 60-day public comment period on a petition proposing changes to rules on "fraudulent misrepresentation." The amendment would explicitly prohibit candidates and campaigns from using deceptive AI-generated content to mislead voters.
This marks a significant shift after earlier efforts to address deepfakes stalled, including a failed vote in June 2023.
Advocacy groups like Public Citizen have pushed for action, warning that unregulated deepfakes could undermine election integrity. Thousands of public comments submitted to the FEC reflect widespread concern about this issue.
The proposed changes aim to adapt existing regulations to address the unique challenges posed by generative AI technologies. By taking this step, the FEC is acknowledging the need for updated safeguards as digital tools evolve.
Challenges and Limitations
Despite progress, several challenges remain. The FEC’s authority is limited primarily to candidate-to-candidate interactions and campaign finance disclosures. Expanding its jurisdiction to cover deceptive practices by third parties or independent expenditures would likely require congressional action.
Some within the FEC argue that existing rules on fraudulent misrepresentation already cover deceptive practices, including those involving deepfakes. However, proponents of regulation contend that explicitly addressing AI-generated content would provide clarity and deter misuse.
“We think the FEC’s move to regulate AI deepfakes is a critical step toward protecting election integrity. As campaigns increasingly embrace generative AI tools, ensuring transparency and accountability will be essential.”