FT Film (2026) | The Rise of Deepfakes and How To Stop Them

Deepfakes and Financial Crime – How Synthetic Media Is Fueling Scams and What Can Be Done About It

The rise of consumer-grade AI has transformed deepfakes from a technical curiosity into a mainstream criminal tool. What began as academic experiments in face-swapping and voice synthesis is now a commercialised capability available to anyone with a smartphone or a laptop. That accessibility is changing the threat landscape for financial crime: scammers can impersonate trusted figures, sell fraudulent investment schemes to millions, and exploit gaps in verification systems to extract real money from victims. A recent investigation lays out how the technology works, how it has been abused, and what detection and policy responses are emerging – with clear lessons for financial institutions, regulators, and compliance teams.

How easily deepfakes can be made and weaponised

Advances in generative AI mean that creating convincing image and audio fakes no longer requires specialised skills. Off‑the‑shelf apps and free software can take a few photos or an existing clip and produce a face-swap or a voice clone in minutes. The investigation demonstrates this with a simple workflow: upload a headshot and a target video, run open-source tools or consumer apps, and generate a new clip in which one person’s face and voice appear to belong to another. Even when the accent or micro-details are not perfect, the result can be highly persuasive to casual viewers.

The fraud potential is enormous because of two reinforcing factors.

  1. Distribution platforms like social channels allow crafted deepfake ads or messages to reach millions quickly.
  2. Social engineering capitalises on familiarity and authority: if a clip appears to show a respected commentator recommending an investment, viewers are more likely to act impulsively.

Low-tech predecessors and the persistence of impersonation scams

Deepfakes are the latest stage in a longer history of impersonation scams. The investigation recounts the 2015 “rubber mask” scheme in which a fraudster used a hyperrealistic mask to impersonate a minister and convince victims to transfer large sums. The key insight is that realism, whether delivered through silicone and practical effects or through pixels and neural networks, is what enables deception. High-quality masks, props, and edited video once achieved similar ends; now AI makes synthesis cheaper and faster, while practical effects still have advantages in motion and lighting for certain uses. Criminals will use whatever combination of tools produces the most credible result.

Bastian Schwind-Wagner

"Deepfakes are a growing risk for financial crime because they make impersonation cheap and scalable, allowing fraudsters to reach large audiences with convincing but fraudulent messages. Financial institutions must assume media can be manipulated and enforce stronger verification and transaction controls to prevent losses.

"Effective defense requires combining technical detection, cross-platform provenance, and regulatory clarity while training staff to treat audiovisual evidence with skepticism. Collaboration between banks, tech platforms, and regulators will be essential to limit harm and keep trust in financial communications."

Detection: why it’s possible – and still challenging

Detection companies illustrate both the promise and the limits of technical defence. One approach is to ask two questions in live interactions: is this a human, and is this the right human? Detection relies on finding small anomalies in audio and video across spatial and temporal dimensions – for example, micro‑artifacts in voice spectra, mismatches in facial motion patterns, or temporal inconsistencies across frames. Importantly, there is a cost asymmetry: according to the investigation, it can be many orders of magnitude cheaper to detect a deepfake than to fabricate a fully flawless one, because a detector only needs to catch a single mistake.
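
To make the idea concrete, here is a minimal, illustrative sketch in Python of a frame-to-frame temporal-consistency score. It is not any vendor’s production method: real detectors combine far richer spatial, spectral, and multimodal features, and the function name and stand-in data below are hypothetical.

```python
import numpy as np

def temporal_inconsistency_score(frames: np.ndarray) -> float:
    # Mean absolute change between consecutive frames. Synthetic face
    # regions sometimes "jump" between frames in ways natural motion
    # does not; a real detector would use far richer multimodal
    # features, this only illustrates the temporal-anomaly idea.
    diffs = np.abs(np.diff(frames.astype(np.float32), axis=0))
    return float(diffs.mean())

# Stand-in data: 30 grayscale frames of 64x64 pixels (hypothetical clip).
frames = np.random.rand(30, 64, 64)
print(f"inconsistency score: {temporal_inconsistency_score(frames):.4f}")
```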

Yet the arms race is real. Synthetic media developers are rapidly closing known detection gaps by eliminating early telltales – unnatural blinking, visible artefacts, poor lip sync – and denoising tools now make low-quality source material usable. Detection must therefore evolve constantly, analysing subtler multimodal signals and leveraging access to high-fidelity live data streams where possible (for instance, through partnerships with conferencing platforms). Still, detection is not a panacea: many consumer viewers won’t spot nuanced anomalies, and detection systems must be integrated into platforms and workflows to matter in practice.

Supply, convenience, and the democratisation of abuse

Several technical and social dynamics make abuse likely to continue growing.

  1. Multiple model families and techniques coexist – diffusion models for images, autoencoders for face swaps, specialised lip-sync tools, voice cloning systems, and full-body avatar generators – so no single defensive strategy is sufficient.
  2. The barriers to entry are low: a smartphone alone can produce convincing audio deepfakes in real time and, where real-time output is not required, usable video deepfakes as well.
  3. The widespread availability of personal imagery online means that the raw materials for cloning are often publicly accessible; it takes only a few images or audio samples to produce content that is “good enough” to deceive.

These factors have particular implications for young people and other vulnerable groups, who are already disproportionately affected by non-consensual image abuse. For financial crime, the risk concentrates where trust, authority, and rapid online distribution intersect – such as fake endorsements, fraudulent investment pitches, and impersonation of corporate officers or advisors.

Policy, platform measures, and provenance systems

Responses are emerging on several fronts but are uneven across regions. Technical initiatives include digital watermarking, metadata provenance frameworks, and industry coalitions working on content authenticity standards. The Coalition for Content Provenance and Authenticity (C2PA) and invisible watermarking schemes aim to attach verifiable signals to original content so that downstream platforms and users can assess whether a clip is generated or altered. However, these approaches face practical challenges: signals can be stripped or lost as content is reposted across services, and there is no global standard for how such provenance should be implemented or interpreted.
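
To illustrate the provenance principle, the following minimal sketch assumes a hypothetical manifest format rather than the actual C2PA specification: a content hash binds a manifest to the exact bytes of an asset, so any alteration breaks the binding. A real system would additionally sign the manifest cryptographically so it cannot simply be regenerated for altered content.

```python
import hashlib

def make_manifest(media: bytes, creator: str) -> dict:
    # Bind the manifest to these exact bytes via a content hash.
    return {"creator": creator,
            "content_sha256": hashlib.sha256(media).hexdigest()}

def verify_binding(media: bytes, manifest: dict) -> bool:
    # Re-hash the received bytes; any edit or re-encode breaks the binding.
    return hashlib.sha256(media).hexdigest() == manifest["content_sha256"]

clip = b"...raw video bytes stand in here..."
manifest = make_manifest(clip, "Example Newsroom")   # hypothetical creator
assert verify_binding(clip, manifest)                # untouched clip passes
assert not verify_binding(clip + b"x", manifest)     # altered clip fails
```

The sketch also exposes the weakness noted above: if a reposting service strips the manifest, the receiving platform has nothing left to verify – which is why interoperable, cross-platform adoption matters.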

Regulation is similarly fragmented. In the United States, federal moves toward mandatory transparency have stalled, leaving a patchwork of state-level rules, such as California’s, to set the direction. Proposed laws like the 2025 US NO FAKES Act (the Nurture Originals, Foster Art, and Keep Entertainment Safe Act) confront difficult issues around likeness rights and speech, and cross-border enforcement remains complex. In the EU, the AI Act and related provisions aim to create stronger obligations around transparency and labelling for synthetic content. The investigation highlights that technology companies’ policies and labelling decisions can be controversial and may have unintended consequences for legitimate creative uses.

Practical implications for financial crime prevention

For banks, brokerages, asset managers, and compliance teams, the deepfake threat requires both immediate operational changes and longer-term strategic investments. Operationally, organisations should assume that any combination of audio, video, and image evidence can be forged, and should not rely on such material alone when authorising transfers, granting account access, or validating investment advice. Multi-factor authentication, step-up verification for high-value transactions, human review of anomalous requests, and transaction monitoring that looks for atypical transfer patterns remain essential.
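
As a concrete illustration of step-up verification, the sketch below encodes the rule that media-based channels alone never authorise a high-value transfer. The type names and threshold are hypothetical and purely illustrative, not drawn from any regulation or real bank policy.

```python
from dataclasses import dataclass

@dataclass
class TransferRequest:
    amount: float
    channel: str              # e.g. "video_call", "phone", "in_app"
    beneficiary_known: bool   # beneficiary previously verified out of band

STEP_UP_THRESHOLD = 10_000.0  # illustrative policy limit, not a real rule

def requires_step_up(req: TransferRequest) -> bool:
    # High-value transfers always need out-of-band verification, no
    # matter how convincing the audio or video request appeared.
    if req.amount >= STEP_UP_THRESHOLD:
        return True
    # Requests arriving over forgeable media channels for unknown
    # beneficiaries are also escalated for human review.
    if req.channel in {"video_call", "phone"} and not req.beneficiary_known:
        return True
    return False

# A convincing "CEO" video call asking for a large transfer still
# triggers step-up verification under this rule.
assert requires_step_up(TransferRequest(250_000.0, "video_call", False))
```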

Strategically, firms should invest in detection partnerships and integrate veracity checks into customer-facing and back-office channels. Relationships with vendors that can analyse live audio-video streams for human vs. synthetic signals, and collaboration with platforms to flag suspicious campaigns or remove fraudulent adverts, will mitigate exposure. Training frontline staff to treat media-based claims with scepticism, and to follow escalation paths when a purported public figure or executive appears on unsolicited video or voice messages, is critical.

Broader societal steps and the need for shared infrastructure

Technical solutions and corporate policies alone are insufficient. The investigation underscores the need for coordinated public policy, common standards for provenance and watermarking, and better cross-platform mechanisms to trace the origin and movement of synthetic content. Education and clear user-facing labels can help, but they must be coupled with resilient technical signals that travel with content. Regulators, technology providers, media organisations, and financial institutions must collaborate to build interoperable systems that make manipulation detectable at scale.

Conclusion: adapt, integrate, and insist on provenance

Deepfakes add a potent weapon to the fraudster’s toolkit by making impersonation cheaper, faster, and more convincing. Financial crime teams should treat synthetic media as a material threat vector and adapt controls accordingly: do not accept media proof at face value, integrate detection technologies, implement stricter verification for high-risk approvals, and participate in wider efforts to embed provenance into content ecosystems. The arms race between synthetic media creation and detection will continue, but combining technical detection, policy measures, platform practices, and institutional vigilance can meaningfully reduce the harm and financial loss that deepfakes enable.

Documentary copyright holder(s): FT Film
Bastian Schwind-Wagner is a recognised expert in anti-money laundering (AML), countering the financing of terrorism (CFT), compliance, data protection, risk management, and whistleblowing. He has worked for fund management companies for more than 24 years, where he has held senior positions in these areas.