
23 June 2025
RUSI (2025) | Virtual Threats: Terrorist Financing via Online Gaming
The growth of online gaming has created a complex digital economy in which virtual goods, in-game currencies and large social gaming communities intersect with real-world money and communications tools. A RUSI webinar, built around a new report by Gonzalo Saiz, examined whether and how that digital ecosystem can be abused to raise, move or launder funds linked to terrorism and violent extremism. The discussion highlighted current knowledge gaps, practical investigative challenges, and the need for targeted research and policy responses.
The gaming economy and how value moves
Modern games routinely include microtransactions: purchases of cosmetic items (skins), power-ups, expansions and loot boxes, often paid for in fiat via credit or prepaid cards and converted into in-game currencies. These virtual items acquire real-world value because third-party marketplaces and buyers are willing to pay fiat or cryptocurrency for them. That convertibility, whether direct through exchanges that support two-way flows or indirect via third-party markets and informal trades, creates routes for funds to leave the gaming environment and enter traditional financial channels.
Criminal exploitation typically resembles classic money‑laundering typologies more than bespoke “terrorist-only” methods. Stolen card or prepaid funds can be used to buy game assets, which are then sold on third‑party platforms for fiat or crypto. Communications platforms popular among gamers are crucial enablers because they host the negotiation, coordination and payment instructions, including moves into encrypted or peer‑to‑peer channels. Where crypto is used as the off‑ramp, blockchain tracing helps, but identifying the individuals behind transactions remains difficult.
Scale, evidence and the limits of current research
Several high‑profile examples and investigative reports have flagged misuse of gaming markets. Valve’s 2019 admission that “nearly all” marketplace trades of Counter‑Strike: Global Offensive container keys were linked to fraud, which prompted the company to make newly purchased keys untradeable, is a notable case. But accurate measurement is difficult. Publicly available sales data, self‑reported figures from actors, and small investigative samples give only rough approximations. Many transactions occur off‑platform or via private communications, which reduces observability. As a result, current research and regulatory focus may understate actual activity, or may conversely amplify isolated cases beyond their true scale. The careful takeaway is that vulnerabilities exist and are fertile ground for abuse, even where conclusive evidence about scale and prevalence is lacking.
Terrorist versus extremist financing: legal and practical tensions
A major theme of the webinar was definitional and operational tension between legal counter‑terrorism finance tools and a broader set of extremist harms. Traditional counter‑terrorist financing (CTF) measures and many regulatory obligations apply only to designated terrorist groups or individuals. Yet much of the concerning activity in gaming relates to extremists who fund propaganda, content production, live events, paid speakers, legal defense and personal subsistence — activities that support extremist ecosystems without necessarily meeting a legal terrorism threshold.
This gray zone presents challenges for obliged entities (banks, payment providers) and financial intelligence units. Should suspicious activity reports be filed when transfers fund extremist figures but lack clear links to terrorism? Over‑reporting can overwhelm analysts; under‑reporting misses emergent threats. Different jurisdictions also diverge in designations (for example, some countries list groups like the Proud Boys as terrorist entities while many others do not), complicating cross‑border detection and reporting. The panel argued for improved typologies and targeted guidance so reporting entities can better distinguish between AML and genuine CTF risks.
Why gaming might attract illicit financiers
Gaming platforms offer features attractive to those seeking anonymity and obfuscation: easy account creation without KYC, high-volume microtransactions that can mask small value transfers, and social channels that shift communication into encrypted or private streams. In some cases, gaming enables monetization for extremist creators who have lost traditional incomes through public repudiation: live streams, donations and merchandising can sustain content producers and broaden their networks. While many current abuses resemble money‑laundering more than explicit terrorist finance, the pathway from supporting extremist infrastructure to enabling violence is plausible, and modest contributions can feed wider networks.
Practical investigative and evidentiary obstacles
Law enforcement and investigators face three core obstacles.
- First, data availability: game platform operators and adjacent communication services are often not regulated as obliged entities, so they do not routinely produce financial or transaction reports.
- Second, communications architecture: voice and video chats, ephemeral messages and encrypted private channels leave limited retrievable trails.
- Third, jurisdictional and technical barriers: cross‑border platforms, VPNs and crypto introduce complex attribution hurdles.
These barriers help explain why prosecutions specifically charging terrorism financing tied to gaming remain rare. Ongoing investigations exist, but converting investigative leads into prosecution‑grade evidence requires access to platform data, cooperation from industry, and forensic ability to link donations, in‑game trades or crypto flows to named actors.
Industry engagement and regulatory gaps
Platform responses have tended to focus on content moderation — removing extremist propaganda or deplatforming accounts — rather than financial monitoring. Some game companies have taken proactive steps (for instance, voice moderation using automated tools to enforce community rules), and platforms such as Discord and Twitch report enforcement statistics for content violations. However, financial abuse remains outside the routine concerns of many gaming firms, in part because regulators and the AML/CTF frameworks have not yet consistently treated gaming economies as financial ecosystems. EU AML initiatives expanded regulation for some virtual currency exchange services but generally excluded purely in‑game currencies, leaving regulatory gaps that crafty actors can exploit.
Who should lead and what next steps are needed
Addressing the problem will require coordinated, multi‑stakeholder action. The Financial Action Task Force (FATF) could help by producing sectoral risk assessments or typology reports focused on gaming and adjacent social platforms, clarifying how CTF obligations and guidance should apply. Industry coalitions — such as Tech Against Terrorism or the Global Internet Forum to Counter Terrorism — can extend work on content harms to include financial abuse, share typologies, and promote voluntary standards. NATO and like‑minded state coalitions can fund research and intelligence sharing where multilateral mechanisms are stalled.
On the ground, practical steps include targeted outreach to gaming companies and streaming platforms, development of robust typologies that identify the red flags most relevant to investigator workflows, and improved mechanisms for platform‑to‑law‑enforcement cooperation that respect legal processes while enabling timely evidence preservation.
Research and investigative priorities
Key research priorities identified in the webinar include:
- interviewing investigators with operational experience to extract the initial triggers and evidence patterns that led to successful inquiries;
- sectoral risk assessments mapping which games or marketplaces offer dual‑direction convertibility and thus present higher risk; and
- typology development that goes beyond demographic profiling to focus on transactional, narrative and communication indicators (for example, patterns of donations, atypical resale activity, trading of high‑value skins for small admin fees, or repeated use of specific off‑ramps).
Investigative guidance should prioritize “what to look for” when gaming elements intersect with alleged extremist networks — practical triggers that can prompt mutual legal assistance requests or platform preservation orders.
Conclusion
Online gaming sits at the intersection of entertainment, social interaction and emergent digital commerce. That combination produces legitimate economic activity but also exploitable vulnerabilities. Current evidence confirms that microtransactions and gaming marketplaces have been misused for criminality, and there are plausible pathways for abuse by extremist actors — primarily for fundraising that supports content production, organization and livelihoods rather than direct operational terrorism financing. The principal barriers to better understanding and disruption are data opacity, jurisdictional fragmentation and definitional tensions between AML and CTF frameworks.
Addressing these challenges will require strategic research, clearer typologies and closer cooperation between regulators, investigators, multilateral bodies and industry. That work should be pragmatic:
- gather the investigative leads that already exist,
- translate them into operational indicators, and
- build tailored mechanisms for evidence preservation and platform cooperation.
Until those steps are taken, the gaming environment will remain a soft spot in global efforts to detect and disrupt illicit financing linked to violent extremism.
Acknowledgement
This summary draws on the RUSI webinar discussion featuring Gonzalo Saiz and Jessica Davis, PhD, and on the RUSI report discussed during that event.