FATF ¦ Horizon Scan AI and Deepfakes

AI and Deepfakes Are Rewriting the Playbook for Financial Crime – What Compliance Teams Must Do Now

Artificial intelligence and deepfake technologies are no longer futuristic curiosities; they are active tools in the hands of criminals and a growing threat to anti-money laundering, counter-terrorist financing and counter-proliferation efforts. The FATF's 2025 horizon scan makes clear that these capabilities change both how illicit actors operate and how obliged entities and authorities must respond. The challenge is twofold: criminals use AI to scale and hide illicit activity, while legitimate institutions must rapidly adopt equally sophisticated measures to detect and deter misuse without sacrificing access or privacy.

Deepfakes and the erosion of identity assurance

Deepfakes – convincing synthetic audio, images and video – have moved from rare, high-skill productions to widely available, off-the-shelf tools that anyone with a smartphone can use. That accessibility creates an immediate problem for Customer Due Diligence (CDD). Where Recommendations 10 and 22 require identification using reliable documents and independent data, deepfakes allow impersonation of individuals and manipulation of biometric checks. Criminals can pass video KYC, spoof voices in call-centre interactions and fabricate documentary evidence to support fraudulent firms or sham transactions. As the FATF highlights, the growing reliance on facial recognition and video-based onboarding amplifies these risks, especially when detection capabilities lag behind fraudsters' techniques.

How AI scales both low‑skill and professional operations

The threat is not restricted to lone opportunists. Europol's dual-use framing highlights two vectors: AI lowers the technical bar, letting low-skill offenders run convincing phishing, romance and investment scams, while professionalised cybercriminals use AI to automate and optimise complex laundering and fraud campaigns. Expert actors can emulate device and browser behaviour, synthesise device fingerprints and reproduce login patterns, enabling them to bypass multi-factor checks and stay ahead of static rules. At the same time, criminal networks exploit AI to craft realistic transaction patterns and synthetic identities that blend into normal financial flows, making detection an ever harder technical challenge.

Real cases and key operational implications

Recent cases described in the FATF scan demonstrate the real-world impact: a multinational advisory firm was deceived into a USD 25 million transfer during a deepfake video call, and fraud rings used deepfakes and synthetic IDs to onboard cryptocurrency accounts and route proceeds. Another case used AI-generated broadcast news to pump fraudulent securities offerings and then funnelled the returns through virtual assets and unhosted wallets. These incidents share common operational features: criminal use of synthetic media at the point of identity verification, coordinated layering across fiat and virtual assets, and deliberate cross-border infrastructure choices that exploit legal and enforcement gaps.

Bastian Schwind-Wagner

"AI‑enabled deepfakes and generative models are rapidly increasing the scale and sophistication of financial crime by enabling convincing identity fraud, automated layering schemes, and realistic synthetic documentation. Financial institutions and authorities must upgrade verification processes and monitoring systems to detect these evolving threats while preserving legitimate access to services.

Effective response requires coordinated public–private action: sharing indicators, investing in specialised expertise, and updating legal frameworks to address technology‑enabled offences. Combining AI‑driven detection with human analysis and international cooperation will be essential to disrupt and prosecute sophisticated, adaptive criminal networks."

Detection approaches and defensive adjustments

Detecting synthetic media requires more than signature checks. Sophisticated defence blends technical, human and process controls. Financial institutions should integrate multi‑layer verification: hardware‑based or hybrid liveness checks, advanced biometric modalities combined with behavioural signals, and AI‑driven content validation tools that look for subtle artifacts and inconsistencies. Transaction monitoring must evolve: graph analytics, real‑time anomaly detection, adaptive customer profiling and consortium intelligence sharing can reveal laundering rings whose transaction footprints were created to mimic normal activity. Importantly, human expertise remains essential – trained investigators and specialist witnesses identify contextual anomalies automated systems miss.
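
To make the graph-analytics idea concrete, here is a minimal Python sketch using the networkx library that flags short cycles of similar-value transfers – one classic layering signature in which funds return close to their origin after a few hops. The account names, amounts and the 10% tolerance are hypothetical illustrations, not FATF-endorsed parameters.

import networkx as nx

# Hypothetical edge list of transfers: (sender, receiver, amount).
transfers = [
    ("acct_A", "acct_B", 9_500), ("acct_B", "acct_C", 9_400),
    ("acct_C", "acct_A", 9_300), ("acct_D", "acct_E", 120),
]

G = nx.DiGraph()
for sender, receiver, amount in transfers:
    G.add_edge(sender, receiver, amount=amount)

# Short cycles of near-uniform value are worth a second look: money that
# loops back to its origin after a few hops mimics layering behaviour.
for cycle in nx.simple_cycles(G):
    if 2 < len(cycle) <= 5:
        amounts = [G[u][v]["amount"] for u, v in zip(cycle, cycle[1:] + cycle[:1])]
        if max(amounts) - min(amounts) < 0.1 * max(amounts):
            print("Possible layering ring:", cycle, amounts)

In production, a rule like this would be one feature among many feeding an alerting pipeline reviewed by analysts, not a standalone detector.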

AI as both threat and tool

The scan emphasises that AI is dual-purpose. While it empowers criminals, the same techniques can strengthen AML controls. Generative and discriminative models can enhance document verification, detect tampering in photos or videos, and surface suspicious behavioural patterns. Some institutions are already combining big-data photo forensics with transaction monitoring to reveal fictitious business activity. To be effective, these AI defences require continuous retraining, explainability practices to preserve evidentiary value, and integration with traditional investigative methods such as forensic accounting and blockchain analytics.
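
As a simple illustration of the photo-forensics side, the sketch below implements error-level analysis (ELA), a well-known tamper-detection heuristic: regions edited after a JPEG save often recompress differently from their surroundings. The file name is hypothetical, and ELA is a single weak signal – production systems combine many such signals with trained classifiers and human review.

from PIL import Image, ImageChops
import io

def ela_max_difference(path: str, quality: int = 90) -> int:
    """Recompress the image once and return the largest per-channel pixel
    difference; unusually high or spatially uneven values can indicate
    locally edited regions."""
    original = Image.open(path).convert("RGB")
    buffer = io.BytesIO()
    original.save(buffer, "JPEG", quality=quality)  # one controlled recompression
    buffer.seek(0)
    recompressed = Image.open(buffer)
    difference = ImageChops.difference(original, recompressed)
    return max(channel_max for _, channel_max in difference.getextrema())

# Hypothetical input; score thresholds must be calibrated per document type.
print(ela_max_difference("submitted_id_document.jpg"))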

Emerging scenarios to watch

The horizon scan outlines plausible, higher-impact scenarios that demand attention now. One involves AI generating convincing documentation and invoices to support complex layering. Another describes agent-based AI systems operating near-autonomously to execute micro-transactions across hundreds of mule accounts, synchronising activity to minimise detection risk. A further scenario imagines professional sanctions evasion: an AI advisor that synthesises legal, corporate and jurisdictional intelligence to design routes for moving funds and goods around controls. Each scenario shows how AI delivers both scale and adaptivity – precisely the traits that make detection and enforcement harder.

Challenges for investigators and prosecutors

AI-enabled crime complicates investigations. Agents can probe regulatory materials to identify weak jurisdictions, generate synthetic trade flows and produce vast quantities of plausible but false evidence that strain analysis and court processes. The "black box" nature of many AI systems poses additional evidentiary problems: when outputs lack transparent provenance, investigators and judges may find attribution and intent harder to establish. Building pools of qualified experts, updating criminal codes to recognise technology-enabled offences, and training prosecutors in relevant technical concepts are urgent priorities.

Regulatory and cooperative responses

There is no single global AI regulatory framework yet, though jurisdictions are moving at different speeds. FATF Recommendation 15’s risk‑based approach to new technologies provides a useful foundation: obliged entities must assess and manage risks associated with AI tools in products and services. The FATF scan underlines the need for cross‑sector cooperation – public‑private partnerships, academic collaboration and industry standards – to keep pace. Information sharing, joint red‑teaming of models, and standardised indicators of synthetic media will help make detection more resilient and consistent across borders.

Immediate practical steps for financial institutions and regulators

Financial institutions must reassess CDD and onboarding controls with AI threats in mind. That means combining stronger liveness and biometric checks with behavioural profiling and improved document anti‑forgery tools. Transaction monitoring should incorporate network analytics and adaptive models designed to detect adversarially created patterns. Regulators and FIUs should prioritise capability building: invest in specialist cybercrime units, develop standardised deepfake detection toolkits and support training for investigators and prosecutors. Equally important is privacy‑respecting data governance and ensuring that measures do not unduly exclude legitimate customers.
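
To illustrate what an "adaptive model" can mean in practice, the sketch below trains scikit-learn's IsolationForest, an unsupervised outlier detector, on synthetic per-day customer features (volume, transaction count, distinct counterparty countries). All figures are invented for illustration; a real deployment would retrain per customer segment and calibrate thresholds against analyst feedback.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)
# Synthetic "normal" history: daily volume, txn count, counterparty countries.
history = rng.normal(loc=[2_000, 12, 2], scale=[400, 3, 0.5], size=(500, 3))
model = IsolationForest(contamination=0.01, random_state=0).fit(history)

# Score a new day's behaviour; negative scores mark outliers that should
# route to human review rather than automatic rejection.
new_day = np.array([[45_000, 180, 9]])
print(model.decision_function(new_day))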

Conclusion

AI and deepfakes are reshaping the landscape of financial crime – enabling fraud, sophisticated laundering, sanctions evasion and identity fabrication at scale. The FATF’s horizon scan is a clear call to action: stakeholders must move beyond ad hoc responses and build layered, adaptive defences that combine AI‑driven detection with human expertise, legal reform and international cooperation. The stakes are high, but so are the tools available to respond. The imperative now is to invest strategically in technology, people and partnerships so the financial system remains resilient as these threats evolve.

The information in this article is of a general nature and is provided for informational purposes only. If you need legal advice for your individual situation, you should seek the advice of a qualified lawyer.
Dive deeper
  • FATF ¦ Horizon Scan AI and Deepfakes ¦ Link
Bastian Schwind-Wagner
Bastian is a recognized expert in anti-money laundering (AML), countering the financing of terrorism (CFT), compliance, data protection, risk management, and whistleblowing. He has worked for fund management companies for more than 24 years, where he has held senior positions in these areas.