
Advertising CEO Targeted by AI Voice Clone

Michael Matias, Co-Founder & CEO
May 11, 2024

In a demonstration of the dangers posed by artificial intelligence, Mark Read, CEO of WPP, one of the world's largest advertising firms, was recently targeted in an elaborate cyberattack involving AI-driven voice cloning.

This incident underscores the growing sophistication of cybercriminals and the vulnerabilities that even top executives face in a digital world.

The Attack: A Sophisticated AI Scheme

Attackers used advanced AI tools to clone CEO Mark Read's voice with remarkable accuracy. By leveraging publicly available recordings of his voice, they created a synthetic version capable of mimicking his speech patterns and tone.

This cloned voice was then used in a scam that involved setting up a fake WhatsApp account using Read’s photo as the profile picture.

The perpetrators arranged a Microsoft Teams meeting with other WPP executives, during which they employed the cloned voice and a deepfake video to impersonate Read. Their goal was to solicit sensitive information and financial transfers.

During the meeting, the scammers relied on text-based chat and off-camera audio to maintain their deception.

They targeted another senior executive within WPP, attempting to exploit trust and familiarity to extract money and personal details. Fortunately, the vigilant employees detected inconsistencies in the communication, thwarting the attack before any damage was done.

The Rise of AI-Enhanced Cyber Threats

This incident is not an isolated case but part of a broader trend in cybercrime fueled by advancements in artificial intelligence. Tools capable of generating deepfake videos and cloning voices are becoming more accessible, enabling criminals to execute increasingly convincing scams.

Voice cloning technology, in particular, poses unique challenges. With just a few seconds of audio—easily obtainable from social media or public appearances—cybercriminals can create highly realistic replicas of someone’s voice.

This capability has been used in various scams, from impersonating CEOs to fake ransom calls involving family members.

The Implications for Businesses

The attack on WPP highlights how AI-driven scams are evolving beyond traditional phishing emails into more sophisticated forms like vishing (voice phishing).

These attacks exploit not only technological gaps but also human psychology, making them harder to detect and prevent. For businesses, particularly those reliant on verbal communication for critical transactions, this represents a significant risk.

The financial sector has already experienced losses due to similar scams. In one high-profile case in 2024, a multinational finance company lost $25 million after employees were deceived by deepfake voices during a conference call. These incidents demonstrate that even well-trained professionals can fall victim to such schemes when faced with highly convincing impersonations.

Mark Read’s warning to his employees serves as a reminder of the sophistication of cyberattacks targeting senior leaders. He emphasized that even familiar voices or images should not be trusted without verification, urging caution when handling sensitive information or financial requests.

The incident at WPP, along with others like it, underscores the need for deepfake detection that can keep pace with evolving threats. By identifying synthetic audio and video before they reach their targets, detection tools reduce the chances that misinformation and impersonation scams succeed.
