AI deepfake perfectly clones ex-Binance CEO CZ’s voice, reigniting fraud fears
By Liam 'Akiba' Wright
Zhao highlights the growing threat of AI-generated deceptions, warning of impersonation risks amid rising deepfake scams.
Changpeng “CZ” Zhao, former CEO of Binance, stated Thursday that he had encountered an AI-generated video that replicated his voice so accurately that he could not distinguish it from a real recording.
The post, shared via X, featured an AI-generated voice-over of Zhao speaking Mandarin, synced to his facial movements with a precision he described as “scary.”
The video shows Zhao speaking in Chinese over a series of video clips, AI-generated content, and photos. Its fidelity renewed concerns around the unauthorized use of AI to impersonate public figures.
The incident adds to a growing body of cases where digital likenesses of crypto executives have been cloned using generative tools, sometimes for fraudulent purposes.
Zhao, who remains a prominent figure in the crypto industry despite stepping down as Binance CEO in 2023, has previously issued warnings about impersonation attempts involving deepfakes. In October 2024, he advised users not to trust any video footage requesting crypto transfers, acknowledging the circulation of altered content bearing his likeness.
Deepfakes and crypto: Increasing operational risk
The latest video adds a new dimension to impersonation tactics that have moved beyond static images and text. In 2022, Binance’s then-Chief Communications Officer, Patrick Hillmann, disclosed that scammers had used a video simulation of him to conduct meetings with project representatives over Zoom.
The synthetic footage was stitched together using years of public interviews and online appearances, enabling actors to schedule live calls with targets under the pretense of official exchange engagement.
Zhao’s experience suggests voice replication has reached a level of realism convincing even to the person being mimicked, raising fraud risks that extend well beyond social media impersonation.
In February 2024, staff at Arup’s Hong Kong office were deceived into transferring approximately $25 million during a Microsoft Teams meeting, believing they were speaking with their UK-based finance director. According to the South China Morning Post, every other participant on the call was an AI-generated simulation.
Voice-cloning capabilities now require minimal input
Tools once dependent on extensive voice samples now operate with only brief recordings. Many consumer-level systems, such as ElevenLabs, can generate a convincing clone from less than 60 seconds of audio. A UK financial institution reported in January that more than a quarter of UK adults believe they encountered a cloned-voice scam within the prior 12 months.
These tools are increasingly available at low cost. According to threat intelligence briefings from CyFlare, turnkey access to voice-to-voice cloning APIs can be purchased for as little as $5 on darknet marketplaces. While commercial models offer watermarking and opt-in requirements, open-source and black-market alternatives rarely adhere to such standards.
The European Union’s Artificial Intelligence Act, formally adopted in March 2024, requires that deepfake content be clearly labeled when deployed in public settings. However, enforcement remains distant, with full compliance not expected until 2026.
With regulatory enforcement still pending, hardware manufacturers are beginning to integrate detection capabilities directly into consumer devices.
Mobile World Congress 2025 in Barcelona featured several demonstrations of on-device tools designed to detect audio and visual manipulation in real time. While not yet commercially available, these implementations aim to reduce user dependence on external verification services.