
Regulators Tackle Deepfakes: New Laws and Policies

Michael Matias, Co-Founder & CEO
January 10, 2024

The year began with a growing awareness of the enhanced sophistication and accessibility of deepfake technology. Incidents of manipulated political videos, fraudulent audio scams, and the proliferation of non-consensual deepfake pornography underscored the urgent need for robust regulatory frameworks.

This heightened awareness, coupled with the exponential growth of generative AI tools, catalyzed a flurry of legislative and regulatory activity throughout the year. 

Legislative Actions and the EU AI Act


One of the most significant developments was the continued progression of the European Union's AI Act. While not finalized in 2023, the Act's ongoing negotiations and move toward agreement focused heavily on regulating high-risk AI systems, including those capable of generating deepfakes.

The debates surrounding the Act highlighted the complexities of balancing innovation with the need to mitigate potential harms. The EU's push for transparency and risk-based regulation set a precedent for other jurisdictions grappling with similar challenges.

In the United States, 2023 saw a surge in state-level legislative efforts. Recognizing the limitations of federal action, several states introduced and passed laws targeting deepfakes, particularly in the context of elections and non-consensual pornography.

These laws varied in scope and stringency, reflecting the diverse approaches to deepfake regulation across the country. The focus on election integrity was particularly pronounced, with many states enacting measures to prevent the use of deepfakes to manipulate voters.


Regulatory Agency Responses and Guidance

Regulatory agencies also stepped up their efforts. The Federal Trade Commission (FTC) in the US issued updated guidelines on deceptive AI-generated content, emphasizing the importance of clear and conspicuous disclosures.

The FTC's actions signaled a growing recognition of the need to hold companies accountable for the misuse of AI technologies. Similar actions were seen in other countries, with regulators issuing warnings and guidance on the potential harms of deepfakes. 

Legal Precedents and Court Cases


Legal precedents and court cases further shaped the regulatory landscape. While dedicated deepfake legislation was still in its nascent stages, existing laws related to defamation, fraud, and copyright infringement were increasingly applied to deepfake-related offenses.

The rapid growth of generative AI significantly impacted regulatory responses. The ease of access to powerful AI tools amplified the threat of deepfakes, necessitating faster and more comprehensive regulatory action.

Election integrity emerged as another key trend. With several elections taking place or approaching, regulators prioritized measures to prevent the use of deepfakes for political manipulation.


At Clarity, we are encouraged to see the push for transparency and disclosure gaining momentum, with increasing calls for clear labeling of AI-generated content. Labeling will help limit the prevalence of deepfakes, but detection tools remain a critical protection measure.