On Saturday night, Israeli Channel 14 mistakenly aired a manipulated video of former Defense Minister Yoav Gallant—an AI-generated deepfake that appeared to originate from Iranian media sources. The incident, which took place during the channel’s evening newscast, showed Gallant speaking in Hebrew but with a distinct Persian accent. Recognizing that something was off, the anchor interrupted the broadcast mid-sentence and called the video out as fabricated.
“On the first sentence I said stop the video. We apologize. This is cooked… These are not Gallant’s words but AI trying to insert messages about the U.S. and the Houthis,” said anchor Sarah Beck live on air.
Shortly after, Channel 14 issued an official statement confirming that the video was aired without prior verification and that an internal investigation was underway.
What Actually Happened?
The video portrayed Gallant stating that “the U.S. will not be able to defeat the Houthis,” a politically charged statement intended to sow confusion and manipulate public sentiment. Although the channel removed the clip within seconds, the damage was already done: the AI-generated video had reached thousands of viewers.
This incident highlights the speed, sophistication, and geopolitical implications of deepfake attacks.
How Clarity Responded — in Real Time
Minutes after the clip aired, our team at Clarity ran the footage through Clarity Studio, our real-time media analysis and deepfake detection platform. The results were clear:
- Manipulation Level: High
- Audio-Visual Inconsistencies: Detected in voice pattern and facial dynamics
- Anomaly Source: Synthetic voice generation with foreign accent simulation
Here’s the detection screenshot from Clarity Studio:
[Screenshot: Clarity Studio detection results for the Gallant clip]
We identified clear mismatches between Gallant’s known voice and speech patterns and those in the clip, along with temporal inconsistencies between facial movement and the audio track. Both are hallmarks of state-sponsored deepfake manipulation.
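Clarity Studio’s detection models are proprietary and are not described here. Purely as an illustration of the general idea behind one such signal, the sketch below checks whether an audio-energy envelope and a mouth-openness signal line up in time. Everything in it is an assumption made for the example: the function name sync_offset_frames, the synthetic signals, and the thresholds implied by the comments. It is not Clarity’s implementation.

```python
import numpy as np

def sync_offset_frames(audio_energy: np.ndarray, mouth_openness: np.ndarray,
                       max_lag: int = 15) -> tuple[int, float]:
    """Find the lag (in video frames) that best aligns an audio-energy
    envelope with a mouth-openness signal, and how strong that alignment is.

    A large best-fit lag, or a weak peak correlation, is one crude hint
    that the voice and the face were produced separately.
    """
    # Normalize both signals so the comparison is scale-invariant.
    a = (audio_energy - audio_energy.mean()) / (audio_energy.std() + 1e-9)
    m = (mouth_openness - mouth_openness.mean()) / (mouth_openness.std() + 1e-9)

    best_lag, best_corr = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        # Shift one signal against the other and measure Pearson correlation.
        if lag >= 0:
            corr = np.corrcoef(a[lag:], m[:len(m) - lag])[0, 1]
        else:
            corr = np.corrcoef(a[:lag], m[-lag:])[0, 1]
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return best_lag, float(best_corr)

# Toy usage with synthetic signals: genuine speech tracks mouth motion with
# only a tiny lag; an unrelated (dubbed or synthesized) track does not.
rng = np.random.default_rng(0)
mouth = np.clip(np.cumsum(rng.normal(size=300)) * 0.1, 0, None)
genuine_audio = np.roll(mouth, 2) + rng.normal(scale=0.05, size=300)       # ~2-frame delay
unrelated_audio = np.clip(np.cumsum(rng.normal(size=300)) * 0.1, 0, None)  # independent signal

print(sync_offset_frames(genuine_audio, mouth))    # small lag, correlation near 1
print(sync_offset_frames(unrelated_audio, mouth))  # arbitrary lag, weak correlation
```

In a real pipeline the mouth-openness series would come from tracked facial landmarks and the audio envelope from a short-time energy measure of the soundtrack, and many such signals would be combined; the point of the sketch is simply that lips and voice produced separately rarely correlate the way genuine footage does.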
Why It Matters
This wasn’t a fringe incident. This was a high-profile deception attempt broadcast on national television. Deepfakes are no longer future threats. They are present-day weapons—used to spread disinformation, manipulate public opinion, and erode trust in media.
And this time, Clarity caught it before the narrative could spiral out of control.
The Takeaway
Broadcasters, law enforcement, and government agencies need tools that can verify audio and video authenticity in real time. This isn’t just about technology—it’s about safeguarding democratic discourse and preventing psychological operations from hostile actors.
At Clarity, we’re building the tools to detect these threats before they become headlines.