What once required a Hollywood budget can now be cobbled together in someone’s basement in an afternoon: a convincingly lifelike video of a public figure saying things they never said, doing things they never did. Welcome to the deepfake economy—a new frontier for financial fraud. No longer the exclusive headache of celebrities and politicians, these synthetic personas are now crashing the boardrooms of Wall Street and quietly phishing retail investors with terrifying accuracy.
The case that rattled the industry recently? A finance employee in Hong Kong wired millions after attending a video call populated entirely by digital ghosts—deepfakes impersonating real executives. It sounds absurd until you see the footage. These weren’t cartoonish simulations—they blinked, nodded, exchanged pleasantries. And then they made off with the money.

Instagram | fortune500 | A deepfake video falsely depicted former Goldman Sachs strategist Cohen promoting a WhatsApp stock scheme.
They’ve Got the Look, the Voice, the Backstory—Just Not the Ethics
One recent scam featured eerily accurate replicas of Goldman Sachs veterans Abby Joseph Cohen and David Kostin pushing stock tips on Instagram. On the surface, it resembled your standard expert Q&A. But look closer—or listen longer—and the cracks appeared: mismatched intonation, a curious stiffness in facial movement, a pitch that was just a hair too robotic. Still, the fakes were polished enough to lure unsuspecting investors into a WhatsApp group with promises of “retirement-ready” returns within five years.
For context, Cohen retired from Goldman in 2021 after three decades of economic forecasting. In real life, she’s never promoted “undervalued tech stock secrets” via encrypted chat. But in the scam, her doppelgänger did exactly that.
What saved the day wasn’t tech. It was the institution’s fast response—Goldman Sachs flagged the fake and publicly discredited the campaign. No high-tech detection tools. Just a solid internal comms team, quick corporate reflexes, and a good eye.
Why Finance Is Especially Vulnerable
In finance, trust isn’t just an asset—it’s the currency. And deepfakes counterfeit that currency with alarming ease. Unlike other sectors, finance thrives on the perception of authority. If a familiar name appears on your feed delivering confident, specific advice, the average investor isn’t thinking about GANs (generative adversarial networks). They’re thinking about missed opportunities.
Deepfakes exploit the very qualities that make financial professionals effective: clear communication, personal branding, and consistency. Ironically, the more trustworthy someone appears online, the more likely their digital likeness becomes a tool for fraud.

Freepik | Stay vigilant against deepfakes by knowing the threats and detection tools.
What the Industry’s Doing (and Where It’s Falling Short)
The arms race is already on. CFOs are now on speed dial with CISOs, hashing out contingency plans and embedding biometric markers and watermarks in internal communications. Some firms are using real-time liveness detection in video calls, flagging inconsistencies like delayed blinking or unnatural lighting.
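Liveness checks like these usually reduce to behavioral heuristics layered on top of a face-tracking model. As a rough illustration only (not any vendor's actual algorithm), here is a minimal sketch of one such signal: humans blink at irregular intervals every few seconds, while crude synthetic video often blinks too rarely or with metronome-like regularity. The function name and thresholds below are hypothetical, and a real system would feed it timestamps from a face-landmark detector.

```python
from statistics import pstdev

# Hypothetical heuristic: humans blink roughly every 2-10 seconds, and the
# gaps between blinks vary. Very long gaps or unnaturally uniform gaps are
# red flags. Thresholds here are illustrative, not production-tuned.
def blink_liveness_score(blink_times):
    """Given blink timestamps in seconds, return (suspicious, reason)."""
    if len(blink_times) < 2:
        return True, "too few blinks detected"
    intervals = [b - a for a, b in zip(blink_times, blink_times[1:])]
    mean_gap = sum(intervals) / len(intervals)
    if mean_gap > 15:
        return True, "blink rate far below human baseline"
    if pstdev(intervals) < 0.2:
        return True, "blink timing unnaturally regular"
    return False, "blink pattern within human range"
```

In practice this would be one weak signal among many—lighting consistency, lip-sync latency, head-pose jitter—combined into a composite score, since any single heuristic is easy for a sophisticated fake to satisfy.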
But here’s where most fall short: training. Many employees still don’t know what a deepfake looks like. They’re told to “stay vigilant,” but without examples or drills, that advice becomes meaningless. It’s like teaching someone to recognize counterfeit money by telling them, “you’ll know it when you see it.”
How Not to Get Duped (Even When the Deepfake Looks Legit)
Let’s be honest—most people won’t stop to analyze eye twitch frequencies or lip sync latency. So here are real-world tactics that work:
- Don’t trust a familiar face just because it looks familiar. If the context feels off—odd tone, unlikely platform, urgency without explanation—pause.
- Call, don’t click. Verification still works the old-fashioned way. If someone you “know” is offering financial advice in a way they never have before, pick up the phone.
- Limit exposure on public platforms. The more data scammers have—voice clips, speaking patterns, facial angles—the better their models get. Keep high-quality video content of yourself private unless necessary.
- Train for the weird stuff. Regular cybersecurity training should now include spotting deepfakes. Not theoretical overviews—actual case studies.
This Isn’t Just a Tech Problem. It’s a Trust Crisis
Deepfakes aren’t merely tools for deception; they’re weapons of credibility theft. And in finance, credibility is often the difference between a routine wire transfer and a seven-figure scam. As AI gets sharper, skepticism must keep pace.
That doesn’t mean retreating from digital platforms or fearing every message with a familiar face. But it does mean a permanent shift in how financial professionals verify what they see—and who they think they’re seeing.