
White House Executive Order Demands Watermarking of AI Content

Lior Ben Moha, Data Operations
December 4, 2023

On October 30, 2023, President Joe Biden signed Executive Order 14110, marking a significant step in regulating artificial intelligence (AI) in the United States. This comprehensive directive, titled "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence," focuses on addressing the risks posed by AI technologies while promoting their responsible development.

Among its many provisions, the order mandates the use of digital watermarking for AI-generated content as a means to combat misinformation and ensure content authenticity.


The Role of Watermarking in AI Regulation


Watermarking is a technique used to embed information into digital outputs, such as images, videos, audio clips, and text, to verify their authenticity and trace their origins.
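As a simplified illustration of the concept, the sketch below hides a short identifier in the least significant bits of pixel values. This is a hypothetical toy example, not any scheme referenced by the executive order; production content-provenance systems rely on far more robust techniques such as cryptographic signing or statistical watermarks.

```python
# Minimal least-significant-bit (LSB) watermarking sketch.
# Illustrative only: real AI-content watermarks are designed to
# survive compression, cropping, and other transformations.

def embed_watermark(pixels: list[int], message: str) -> list[int]:
    """Hide the message's bytes in the least significant bit of each pixel."""
    bits = [(byte >> i) & 1 for byte in message.encode() for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for message")
    out = pixels.copy()
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite the LSB with a message bit
    return out

def extract_watermark(pixels: list[int], length: int) -> str:
    """Recover `length` bytes from the pixels' least significant bits."""
    data = bytearray()
    for b in range(length):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        data.append(byte)
    return data.decode()

pixels = list(range(200))            # stand-in for grayscale image data
marked = embed_watermark(pixels, "AI-gen")
print(extract_watermark(marked, 6))  # → AI-gen
```

Because only the lowest bit of each value changes, the marked image is visually identical to the original, yet the mark is also trivially destroyed by re-encoding or adding noise, which is exactly the fragility critics point to.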

The executive order directs the Department of Commerce to develop guidance for watermarking and other content authentication methods. This initiative aims to help users distinguish between real and AI-generated content, addressing concerns about deepfakes and disinformation that have proliferated with advancements in generative AI technologies.

The Biden administration views watermarking as a critical tool for mitigating risks associated with synthetic content. Federal agencies are expected to adopt these tools to ensure that communications they produce are authentic.

This move is intended to set an example for private companies and governments worldwide. However, experts caution that watermarking alone is not a foolproof solution. Current technologies can be bypassed or manipulated, raising concerns about their reliability in practice.


Broader Implications of the Executive Order

The executive order reflects a broader effort by the Biden administration to establish governance frameworks for AI in the absence of comprehensive federal legislation.

It builds on previous initiatives such as the "Blueprint for an AI Bill of Rights" released in 2022 and voluntary commitments from major tech companies earlier in 2023. By invoking measures like the Defense Production Act, the order underscores the urgency of addressing national security risks posed by advanced AI models.

In addition to watermarking, the directive includes provisions for safety testing of AI systems, privacy protections, and measures to promote fair competition in the AI industry. Federal agencies are tasked with developing sector-specific guidelines to address issues such as consumer protection, labor rights, and cybersecurity.

The National Institute of Standards and Technology (NIST) is also directed to establish standards for testing AI models before their deployment.

 

Challenges and Criticisms of the Executive Order


While the executive order has been hailed as a significant step forward in AI regulation, it faces several challenges. One major criticism is its reliance on voluntary compliance by private companies.

The order does not mandate adherence to its guidelines or provide detailed enforcement mechanisms. This raises questions about its effectiveness in addressing immediate risks posed by AI technologies.

Moreover, experts have highlighted technical limitations associated with watermarking. Researchers have demonstrated how current watermarking methods can be circumvented or even used maliciously to insert fake watermarks into content. These vulnerabilities underscore the need for more robust and reliable solutions to authenticate AI-generated outputs.


Executive Order 14110 is a move in the right direction, but it won't stop the ongoing emergence of deepfakes right away. AI-powered deepfake detection will, for now, remain an important way to mitigate the risks posed by malicious AI-generated content.