AI scams in crypto approach breaking point – OpenAI’s new image model shows why they could get worse




A crypto founder had his laptop compromised when he joined what appeared to be a Microsoft Teams call with Pierre Kaklamanos, a Cardano Foundation contact he had spoken with before.

When “Pierre” reached out about Atrium and sent a Teams invite, nothing looked out of place. On the call, the face and voice matched what he remembered, and two other apparent foundation members were present.

When the call lagged and dropped him, a prompt told him his Teams software was out of date and needed reinstalling through Terminal. He ran the command, then shut the laptop off because the battery was dying, which, in retrospect, limited the damage.

He describes himself as “quite technically savvy,” which is part of the point: the attack worked because the context felt legitimate.

Social engineers have always relied on familiarity, and executing that at scale once required either a compromised account or weeks of text-based rapport-building.

The video call was the authentication layer, the thing victims learned to trust, and replicating it is now within reach.

Fake update

Microsoft documented campaigns in February and March 2026 in which malicious files masqueraded as workplace apps, such as msteams.exe and zoomworkspace.clientsetup.exe, with phishing lures that mimicked legitimate Teams and Zoom meeting workflows.

In a separate warning, Microsoft described “ClickFix”-style prompts targeting macOS users, instructing them to paste commands into Terminal and targeting browser passwords, crypto wallets, cloud credentials, and developer keys.

The fake Teams update fits both patterns simultaneously.

Google Cloud's Mandiant unit described a crypto-focused intrusion built on the same structure: a compromised Telegram account, a spoofed Zoom meeting, what witnesses described as a deepfake-style executive video, and troubleshooting commands that launched the infection.

Mandiant said it could not independently verify which AI model, if any, generated the video, but confirmed the group used fake meetings and AI tools during social engineering.

On Apr. 24, the real Pierre Kaklamanos posted on X saying his Telegram had been hacked and that someone was impersonating him, along with “a few other people in the industry this week.”

He told followers to avoid clicking links or booking meetings through the account and to verify contact through LinkedIn direct messages.

By then, the founder had already messaged the account suggesting they switch to Google Meet. Whoever controlled Pierre's Telegram account replied that he had gotten busy and asked to reschedule; the attacker was still managing the persona even after the call ended.

That exchange turns the incident from an isolated embarrassment into a live campaign signal that the method is active, the account compromise is the entry point, and the relationship history is the weapon.

| Stage | What the victim saw | Why it looked legitimate | What the attacker was likely trying to achieve |
|---|---|---|---|
| Initial outreach | “Pierre” reached out about Atrium and suggested a call | The victim had spoken with Pierre before, including on video | Reopen an existing trust relationship instead of starting from a cold approach |
| Meeting setup | A Microsoft Teams invite for the next day | Teams is a normal business workflow and the topic was plausible | Move the target into a controlled environment that felt routine |
| Live call | Familiar face, familiar voice, plus two other apparent Cardano Foundation members | The social context matched the victim’s memory of prior interactions | Lower suspicion and make the call itself feel like verification |
| Call disruption | Lagging, instability, then getting kicked out | Technical glitches are common in video calls | Create frustration and set up the fake “fix” as a normal troubleshooting step |
| Fake update prompt | A message saying Teams was out of date and needed reinstalling through Terminal | Software update prompts are familiar, and the user rarely used Teams | Get the victim to execute a malicious command directly |
| Command execution | The victim ran the command, then shut down the laptop because the battery was dying | The workflow still felt like a routine app fix at that moment | Launch the infection chain and gain access to credentials or device data |
| Post-call follow-up | The victim suggested switching to Google Meet; the attacker said he got busy and asked to reschedule | The persona continued behaving like a real contact after the failed attempt | Keep the relationship alive for another attempt and avoid immediate suspicion |

Why generative media changes the threat surface

The founder said he now believes the call may have involved AI-generated or manipulated video. There is no forensic confirmation of which tools were used; the OpenAI connection rests on the company's own safety documentation.

OpenAI launched its 4o image generation model on Mar. 25, describing it as capable of “precise, accurate, photorealistic outputs,” and released the ChatGPT Images 2.0 System Card on Apr. 21.

The firm stated that the model's “heightened realism” could, absent safeguards, enable more convincing deepfakes of real people, places, or events. One of the leading AI labs has now put on record that its own image model raises the ceiling on what a convincing fake can look like.

The World Economic Forum said in January 2026 that generative AI lowers the barrier to phishing while raising its credibility, through realistic deepfake audio and video that can evade both detection systems and human scrutiny.

INTERPOL declared financial fraud one of the world's most severe and rapidly evolving transnational crimes in March 2026, identifying deepfake videos, audio, and chatbots as tools that make impersonation of trusted people easier to carry out at scale.


Chainalysis estimated that crypto scams and fraud reached $17 billion in 2025, with impersonation scams up 1,400% year over year and AI-enabled scams generating 4.5 times as much revenue as traditional methods.


Crypto attracts this class of attack because it combines high-value targets, fast settlement rails, and an informal communications culture in which Telegram introductions and ad hoc video calls between founders are routine.

Mandiant documented that the group behind the crypto Zoom intrusion targeted software firms, developers, venture firms, and executives across payments, brokerage, staking, and wallet infrastructure.

Mandiant noted that the victim's data could be used to seed future social engineering, with each compromise generating material for the next.

Two paths forward

Zoom announced on Apr. 17 a partnership to add real-time human verification to meetings, a “Verified Human” badge, and a “Deep Face Waiting Room,” treating participant authenticity as a product problem.

Gartner predicts that by 2027, 50% of enterprises will invest in disinformation-security products or TrustOps strategies, up from less than 5% today.

In the bull case, that buildout reaches critical mass quickly enough that attackers must defeat multiple independent trust layers to complete a conversion, and the economics of impersonation campaigns deteriorate.

In the bear case, the timeline compresses before defenses do. Gartner warned that AI agents may halve the time required to exploit account takeovers by 2027, narrowing the window for human hesitation or security team intervention.

Deloitte estimated that generative AI-enabled fraud losses in the US alone could climb from roughly $12 billion in 2023 to $40 billion by 2027.

| Scenario | What changes | What stays vulnerable | Implication for crypto firms |
|---|---|---|---|
| Bull case | Verification tools spread quickly: human-verification badges, liveness checks, stronger internal trust rails, and more formal approval workflows | Informal founder-to-founder chats, legacy messaging habits, and ad hoc scheduling still create openings | Attackers face more friction and lower conversion rates because they must defeat several trust layers instead of one |
| Bear case | AI-generated impersonation improves faster than defenses are adopted; fake meetings and fake troubleshooting become standard playbooks | Public-facing executives, Telegram-based outreach, video-first verification habits, and staff under time pressure | Relationship hijacking becomes routine, and each compromise creates material for the next scam |
| What success looks like | Sensitive requests get verified across separate channels, with known numbers, shared passphrases, hardware keys, or pre-agreed internal systems | Social pressure, urgency, and trust in familiar faces and voices cannot be fully removed | Firms reduce the chance that one spoofed call can lead directly to compromise |
| What failure looks like | Teams rely on the call itself as proof of identity, even as deepfake and impersonation tools improve | Video remains persuasive even when it is no longer reliable as authentication | Crypto organizations become easier to target because executives are both high-value victims and reusable lure assets |

Every public-facing crypto executive becomes both a target and a lure asset, a source of voice recordings, video clips, and relationship graphs that attackers can deploy against the next victim.

Zoom is building liveness checks into meetings, Microsoft is documenting attack chains that impersonate its own software, and the FBI has warned that malicious actors are already using AI-generated voice and text to impersonate trusted contacts, advising against assuming a message is authentic because it appears to come from a known person.

Verification now requires independent rails, such as a known phone number, a hardware key, a shared passphrase established before any meeting, or a pre-agreed internal channel that no attacker has accessed.
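To make the shared-passphrase rail concrete, here is a minimal sketch of how a pre-agreed secret can verify a contact without ever revealing the secret over the suspect channel. This is an illustrative example, not a tool referenced in the article; the passphrase value and function names are hypothetical, and it assumes both parties established the secret in person or over a previously verified channel.

```python
import hmac
import hashlib
import secrets

# Hypothetical pre-shared passphrase, agreed in person or over a channel
# already known to be genuine -- never over the channel being tested.
SHARED_PASSPHRASE = b"example-passphrase-agreed-in-advance"

def make_challenge() -> str:
    """One party sends a fresh random nonce over the suspect channel."""
    return secrets.token_hex(16)

def respond(challenge: str, passphrase: bytes = SHARED_PASSPHRASE) -> str:
    """The contact answers with an HMAC of the nonce. An impersonator who
    controls the account but lacks the passphrase cannot compute this,
    no matter how convincing their video or voice is."""
    return hmac.new(passphrase, challenge.encode(), hashlib.sha256).hexdigest()

def verify(challenge: str, response: str,
           passphrase: bytes = SHARED_PASSPHRASE) -> bool:
    """Check the answer in constant time to avoid timing leaks."""
    expected = hmac.new(passphrase, challenge.encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)
```

The point of the design is that the secret itself never crosses the wire: a deepfake can replay a face and a voice, but it cannot answer a fresh challenge derived from a secret it does not hold. A spoken passphrase exchanged out of band achieves the same thing less formally.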
