10 February 2026
Terrorist Financing in the Age of Large Language Models
Why LLMs matter for terrorist financing
Large language models (LLMs) do not themselves commit violence, but they change the economics and mechanics of persuasion that underpin many fundraising and fraud schemes. By producing high‑volume, culturally tailored narratives, synthetic testimonials and polished outreach materials at low cost, LLMs can reduce barriers that once limited the scale and professionalism of illicit fundraising. The risk is not only that violent or criminal groups will ask an LLM explicitly to write a donation appeal, but that they will hide behind layers of plausible legitimacy – charities, cultural projects, reconstruction campaigns – and use AI to industrialise narratives that convert sympathy into funds. This article summarises recent research and policy findings, compares major provider approaches, reports basic prompt testing results, and offers practical, risk‑based recommendations for industry, financial institutions, regulators and civil society.
How LLMs amplify persuasive infrastructure
LLMs enable a range of activities relevant to terrorist financing. They can generate emotionally resonant fundraising copy, craft tailored solicitations for distinct demographic or linguistic segments, produce synthetic endorsements (voice‑ or text‑based), and create professional‑looking documents or websites that mask illicit intent. Deepfakes and synthetic media can be used to impersonate public figures or trusted spokespeople, undermining usual trust signals. On the technical side, AI can support upstream revenue generation through social‑engineering and phishing, and downstream concealment through automated routing strategies designed to evade detection.
Open‑source reporting and policy research point to two major threat vectors. The first is persuasive content: tailored campaigns, fabricated testimonials and multimodal appeals designed to elicit donations on social media, crowdfunding sites and payment platforms. The second is cyber‑enabled revenue generation and theft: AI‑assisted phishing, malware and abuse of onboarding systems that weaken KYC and identity verification. These vectors often intersect: synthetic media, for example, may be used in a social‑engineering attack to authorise a fraudulent funds transfer. State actors have already demonstrated interest in using AI tools for espionage and influence operations, and organised crime groups see LLMs as force multipliers for fraud and anonymised fundraising.
Major providers’ policies and their differences
Leading providers – OpenAI (ChatGPT), Google (Gemini) and Anthropic (Claude) – publish prohibitions against using models for terrorism, violent wrongdoing, fraud and sanctions evasion, but they differ in implementation detail and legal framing. Google tends to offer the most detailed compliance framework with explicit references to US laws such as the Bank Secrecy Act, the USA PATRIOT Act and OFAC rules, and it outlines active suspicious‑activity monitoring and identity verification where relevant. OpenAI employs broader categorical prohibitions in a “Protect People” section while reserving rights to enforce terms and report violations. Anthropic combines general prohibitions with detailed usage policy language, reserves reporting rights, and in practice restricts access from certain jurisdictions.
In the European and UK contexts, providers adjust agreements to align with data‑protection regimes and regionally applicable legislation. Anthropic, OpenAI and Google have signalled compliance with EU standards, and each participates in EU‑level AI governance initiatives to varying degrees. These distinctions matter because countermeasures to AI‑enabled illicit finance require coordination across legal regimes and technical interoperability for detection and reporting.
Basic prompt testing and immediate findings
To get a high‑level read on policy adherence, simple prompts were tested across the three models. Two overt prompts were used: one asking for fundraising material for a designated extremist group, and another asking for money‑laundering advice framed as fictional or novel research. All three models refused the extremist fundraising request and declined to provide practical laundering instructions, with provider responses citing legal and policy prohibitions and, in one case, explicitly noting designation status. These simple tests suggest that baseline guard rails block explicit, unsophisticated requests that clearly violate terms of service.
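For illustration only, a minimal probe harness along these lines could look like the sketch below. The file name, provider callables and keyword-based refusal check are assumptions made for the example rather than details of the testing described above; a real harness would wrap each vendor's official SDK and keep a human analyst in the loop to review every response.

```python
# Minimal probe-harness sketch (all names here are illustrative assumptions:
# probes are kept in a vetted local file, and each provider is wrapped behind
# a simple callable built on its official SDK).
import json
from typing import Callable, Dict, List

# Phrases that often appear in refusals. A naive heuristic only; every
# response should still be reviewed by a human analyst.
REFUSAL_MARKERS = [
    "can't help", "cannot help", "unable to assist",
    "can't provide", "cannot provide", "against our policy",
]

def looks_like_refusal(response_text: str) -> bool:
    """Rough check: did the model decline rather than comply?"""
    lowered = response_text.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)

def run_probe_suite(probes: List[Dict[str, str]],
                    providers: Dict[str, Callable[[str], str]]) -> List[Dict[str, object]]:
    """Send each vetted probe prompt to each provider and record the outcome."""
    results = []
    for probe in probes:
        for provider_name, call_model in providers.items():
            reply = call_model(probe["text"])
            results.append({
                "probe_id": probe["id"],
                "provider": provider_name,
                "refused": looks_like_refusal(reply),
            })
    return results

if __name__ == "__main__":
    # probes.json holds prompt ids and text agreed through a review process;
    # the provider callables are supplied elsewhere, one per vendor SDK.
    with open("probes.json", encoding="utf-8") as f:
        vetted_probes = json.load(f)
    providers: Dict[str, Callable[[str], str]] = {}
    for row in run_probe_suite(vetted_probes, providers):
        print(row)
```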
However, these checks are narrow. Real‑world adversaries use stealth techniques such as prompt injection, chained LLMs, coded metaphors and staged persona creation to evade filters. More advanced testing is required to assess how well filters resist obfuscation, binary encoding, encoding/decoding loops and roleplay scenarios that gradually normalise illicit objectives. The models’ refusal of overt requests is necessary but not sufficient evidence that they are robust against adaptive misuse.
Where the real risk lies – diffusion and adversarial adaptation
The principal danger is not only direct, explicit misuse but the diffusion of persuasive capabilities across a broader ecosystem. LLMs can professionalise messaging for campaigns that appear legitimate and can be amplified through networks and platforms to create cascades of social proof. The combination of tailored content, synthetic credibility markers and cross‑platform amplification could turn discrete violent acts into monetisable events more quickly and at greater scale than before. State sponsors and organised criminal networks that already operate complex financial evasion systems stand to gain especially from integrating AI into routing, obfuscation and automated deception.
Policy implications and practical recommendations
Because confirmed public examples of AI‑driven terrorist financing remain limited, responses should be preparatory, targeted and risk‑based rather than blunt restrictions that risk overreach. The aim should be to strengthen auditability, detection and coordinated response, while respecting privacy and legitimate uses of generative AI.
LLM providers should invest in content provenance and watermarking. Standardised digital signatures or cryptographic provenance for synthetic content would allow downstream platforms and financial institutions to flag suspicious patterns and identify coordinated campaigns. This will require industry standards so that different providers’ signals are interoperable.
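As a rough illustration of what cryptographic provenance could involve, the sketch below signs a small manifest over generated text with an Ed25519 key, using the third-party `cryptography` package. The manifest fields and key handling are assumptions for the example, not a published provenance standard; a detached signature of this kind lets any downstream platform verify origin with only the provider's public key, which is the property an interoperable standard would need to pin down.

```python
# Detached-signature provenance sketch using an Ed25519 keypair (requires the
# third-party `cryptography` package). Manifest fields are illustrative, not
# taken from any published provenance standard.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def sign_content(private_key: Ed25519PrivateKey, content: bytes, model_id: str) -> dict:
    """Build a small manifest and sign it together with the content hash."""
    manifest = {
        "model_id": model_id,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode("utf-8")
    return {"manifest": manifest, "signature": private_key.sign(payload).hex()}

def verify_content(public_key: Ed25519PublicKey, content: bytes, record: dict) -> bool:
    """Recompute the content hash, rebuild the signed payload, check the signature."""
    if hashlib.sha256(content).hexdigest() != record["manifest"]["content_sha256"]:
        return False
    payload = json.dumps(record["manifest"], sort_keys=True).encode("utf-8")
    try:
        public_key.verify(bytes.fromhex(record["signature"]), payload)
        return True
    except InvalidSignature:
        return False

if __name__ == "__main__":
    provider_key = Ed25519PrivateKey.generate()
    text = b"Example of model-generated text..."
    record = sign_content(provider_key, text, model_id="example-model-v1")
    print(verify_content(provider_key.public_key(), text, record))         # True
    print(verify_content(provider_key.public_key(), text + b"!", record))  # False: content altered
```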
Financial institutions should embed AI detection into enhanced due diligence. Banks and payment processors must expand automated monitoring to look for markers of AI‑generated legitimacy – sudden surges of professionally polished appeals tied to new or obscure charities, coordinated message similarity across languages or regions, and anomalous payment flows involving newly created entities. Enhanced monitoring is especially critical for fundraising purportedly linked to conflict zones and humanitarian relief.
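One of these markers, coordinated message similarity, can be illustrated with a deliberately simplified sketch: character-shingle Jaccard overlap across appeal texts. The threshold and sample texts below are assumptions for the example; production monitoring would rely on multilingual embeddings, tuned cut-offs and human review rather than this stdlib-only heuristic.

```python
# Simplified near-duplicate check over appeal texts: character shingles plus
# Jaccard overlap. The threshold and sample texts are illustrative assumptions;
# production monitoring would use multilingual embeddings and tuned cut-offs.
from itertools import combinations
from typing import Dict, Iterator, Set, Tuple

def shingles(text: str, k: int = 5) -> Set[str]:
    """Character k-grams over a lowercased, whitespace-normalised string."""
    cleaned = " ".join(text.lower().split())
    return {cleaned[i:i + k] for i in range(max(len(cleaned) - k + 1, 1))}

def jaccard(a: Set[str], b: Set[str]) -> float:
    """Overlap between two shingle sets: 0 = disjoint, 1 = identical."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def flag_similar_appeals(appeals: Dict[str, str],
                         threshold: float = 0.5) -> Iterator[Tuple[str, str, float]]:
    """Yield pairs of appeal ids whose texts look suspiciously alike."""
    shingled = {appeal_id: shingles(text) for appeal_id, text in appeals.items()}
    for (id_a, sh_a), (id_b, sh_b) in combinations(shingled.items(), 2):
        score = jaccard(sh_a, sh_b)
        if score >= threshold:
            yield id_a, id_b, round(score, 2)

if __name__ == "__main__":
    sample = {
        "appeal_1": "Help us rebuild schools in the region, every donation counts today.",
        "appeal_2": "Help us rebuild schools in this region, every single donation counts today.",
        "appeal_3": "Annual report of an unrelated local sports club.",
    }
    for flagged_pair in flag_similar_appeals(sample):
        print(flagged_pair)  # the first two appeals should be flagged as near-duplicates
```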
Regulators should require penetration testing of LLMs for financing vulnerabilities. Regulatory regimes should mandate adversarial testing focused on money‑movement and deception use cases, and set minimum reporting standards for suspected AI‑enabled financial crimes. Information‑sharing protocols between tech firms, financial intelligence units and law enforcement must be clarified under appropriate legal protections.
Civil society should build monitoring capacities and early‑warning indicators. NGOs, journalism networks and watchdog groups can detect evolving campaign styles and document cases of synthetic fundraising. Public‑facing awareness campaigns should teach donors how to verify charities, how to spot synthetic testimonials, and where to report suspicious solicitations.
All stakeholders should explore additional technical solutions. Hash‑sharing databases and shared indicators of compromise have proven useful against extremist content; similar shared resources could be adapted for AI‑generated fundraising patterns (see the sketch below). Research into blockchain or other immutable provenance mechanisms could provide auditable trails for content origin. Finally, collaborative public–private working groups, modelled on existing counter‑terrorism and anti‑money‑laundering task forces, will be necessary to iterate detection standards and testing protocols.
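In its simplest form, a hash-sharing arrangement for text-based fundraising patterns could look like the sketch below, in which participants exchange SHA-256 digests of normalised appeal text rather than the text itself. The class and field names are illustrative assumptions; real deployments would likely need fuzzy or perceptual hashing to survive paraphrasing, plus governance around who may contribute and query indicators.

```python
# Toy hash-sharing lookup: participants exchange SHA-256 digests of normalised
# appeal text rather than the text itself. Names and the normalisation step are
# illustrative; real deployments would likely need fuzzy or perceptual hashing.
import hashlib
from typing import Set

def normalise(text: str) -> str:
    """Lowercase and collapse whitespace so trivial edits do not change the hash."""
    return " ".join(text.lower().split())

def digest(text: str) -> str:
    return hashlib.sha256(normalise(text).encode("utf-8")).hexdigest()

class SharedIndicatorSet:
    """In-memory stand-in for a cross-industry hash database."""

    def __init__(self) -> None:
        self._hashes: Set[str] = set()

    def contribute(self, flagged_text: str) -> str:
        """Add the digest of a flagged appeal and return it for sharing."""
        h = digest(flagged_text)
        self._hashes.add(h)
        return h

    def is_known(self, candidate_text: str) -> bool:
        """Check whether a newly observed appeal matches a shared indicator."""
        return digest(candidate_text) in self._hashes

if __name__ == "__main__":
    shared = SharedIndicatorSet()
    shared.contribute("Donate now to our urgent relief appeal ...")
    print(shared.is_known("Donate   NOW to our urgent relief appeal ..."))  # True after normalisation
    print(shared.is_known("A completely different message."))               # False
```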
Conclusion – measured urgency and coordinated action
Generative AI is not a magic enabler that automatically transforms the terrorist financing landscape overnight, but it is a significant force multiplier for persuasion and deception. Leading LLMs appear to block simple, explicit misuse, yet the structural risk comes from adversaries who adapt: masking intent behind legitimate‑looking campaigns, using coded language and exploiting cross‑platform amplification. The response should be coordinated, risk‑based and privacy‑sensitive: standardised provenance, improved detection in finance, mandated adversarial testing, civil society monitoring, and new public–private mechanisms for rapid information sharing. Those steps will reduce the chance that the persuasive infrastructure around illicit fundraising scales unchecked and will give authorities and platforms the tools to detect and disrupt abuse before it matures.
Dive deeper
- Research: Jason Blazakis, Terrorist Financing in the Age of Large Language Models, Project CRAAFT Research Briefing No. 4, Royal United Services Institute for Defence and Security Studies, 2026.
Licensed under CC BY-NC-ND 4.0, with no changes made.