AMLA ¦ Data Collection Exercise - Webinar for Sampled Entities

AMLA launches EU-wide data collection to calibrate a harmonized ML/TF risk assessment – practical guidance and key takeaways

The European Anti-Money Laundering Authority (AMLA) has initiated a large-scale data collection from a sample of roughly 5,000 credit and financial institutions across the EU. The primary aim is methodological: to test, refine and calibrate a harmonized money laundering and terrorist financing (ML/TF) risk assessment and selection methodology that can be applied consistently across sectors and jurisdictions. This exercise is not intended to produce supervisory conclusions about individual institutions. Instead, submitted data will feed into the development and calibration of two methodological strands – the selection model and the rescoring model – to ensure the future EU-level framework is operational, comparable, and proportionate.

Scope, sample composition and reporting level

The methodology is designed to cover the full population of obliged entities subject to AML/CFT supervision across the EU. AMLA’s approach recognizes sectoral differences: the risk assessment methodology will be developed separately for distinct categories of obliged entities to reflect different business models, exposures and risk drivers. The sample that will provide the calibration data blends two complementary approaches.

  1. A targeted component comprises institutions identified by national competent authorities (NCAs) as potentially eligible for AMLA’s direct supervision – ensuring supervisory relevance and inclusion of cross-border, high-impact entities.
  2. A statistically representative random sample targets roughly 5% of institutions within each sector per NCA, with caps and minimums to preserve proportionality and avoid overrepresentation.

Combined, these components yield the approximately 5,000 entities participating in this testing round.
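The random component described above lends itself to a short illustration. The sketch below shows how a roughly 5% draw per sector per NCA with a floor and a ceiling might work; the 5% rate comes from the exercise description, while the specific minimum, cap, and seed values are illustrative assumptions, not figures published by AMLA.

```python
import math
import random

def sample_sector(entities, rate=0.05, minimum=3, cap=50, seed=42):
    """Draw a random sample of entities within one sector for one NCA.

    The 5% rate reflects the exercise description; the `minimum`,
    `cap`, and `seed` values are illustrative assumptions chosen to
    show how proportionality guards might be applied.
    """
    target = math.ceil(len(entities) * rate)      # nominal 5% of the sector
    size = max(target, minimum)                   # floor for very small sectors
    size = min(size, cap, len(entities))          # cap and population limit
    rng = random.Random(seed)
    return rng.sample(entities, size)

# Hypothetical population: 120 entities in one sector under one NCA.
population = [f"entity_{i}" for i in range(120)]
picked = sample_sector(population)
print(len(picked))  # 6 → 5% of 120, within the assumed floor and cap
```

In a small sector the floor dominates (three entities would be drawn from a population of, say, forty), while the cap prevents very large sectors from being overrepresented.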

A critical operational rule for this exercise is that reporting must be performed at solo level for each separate establishment – branches and subsidiaries report to their local supervisors; parent entities report excluding branch and subsidiary data. Consolidated reporting is explicitly excluded for this data collection and for future exercise phases.

Bastian Schwind-Wagner

"AMLA’s calibration exercise is a necessary step to create a consistent, data-driven ML/TF risk assessment across the EU. High-quality, timely and solo-level submissions will directly influence the usability and credibility of the resulting methodology.

Participating entities should prioritise internal data checks and clear documentation of any unavailable values to minimise resubmissions. National supervisors and AMLA will use the collected information solely for methodological testing and calibration, not for individual supervisory judgments."

AMLA provided clarity on the legal foundations that underpin mandatory participation for sampled entities. For institutions that are potentially eligible for direct AMLA supervision, the authority is empowered to carry out the periodic assessments necessary for developing the selection and risk assessment system. For non-eligible entities that are randomly sampled, AMLA relies on the legal basis authorising NCAs to provide it with the information necessary to fulfil its methodological tasks. Given the calibration objective, exemptions are limited: entities that ceased operations during the reference year may be removed, but low activity, low risk or national-level materiality are not grounds for exclusion. This preserves the integrity and representativeness of the calibration dataset.

Timing and intended use of collected data

AMLA circulated the reporting package; sampled entities are scheduled to submit to their NCAs by 22 April for the 2025 reference year. After intake, NCAs will run first-level data quality and plausibility checks and forward the files to AMLA via the established supervisory data collection channel. AMLA will perform further consistency and usability checks across the EU dataset before methodological testing and calibration work in the months that follow. The deliverable from the calibration phase is not institution-level scores for the participants; instead, results will inform indicator definitions, reporting logic, thresholds and any necessary adjustments to the interpretative note and technical specifications ahead of a wider 2027 exercise.

Reporting architecture, templates and practical guidance

AMLA provided a detailed walkthrough of the reporting templates and submission workflow. The reporting package includes an Excel workbook structured to align with existing supervisory data standards used by other EU authorities, and an interpretative note that translates draft regulatory requirements into data point definitions and reporting rules. For this calibration exercise, entities will deliver Excel-based reports; longer-term plans envisage moving to a standardized data model with XBRL/semantic tagging for the operational regime.

The Excel workbook contains a data quality dashboard plus 32 individual templates (sheets) covering basic information, customer profiles, products and services, country breakdowns and AML/CFT controls. Two auxiliary sheets carry admissible values and validation rules. The templates incorporate in-sheet instructions, drop-down menus for controlled values, and automated validation checks that flag errors (blocking submission) and warnings (non-blocking but to be reviewed). The workbook structure outside the reporting cells is locked to preserve processing integrity; modifying locked parts will lead to rejection, as automated intake processes require unaltered templates.

Key instructions and common pitfalls highlighted by AMLA include:

  • Complete reporting at solo level for each establishment; branches and subsidiaries report separately to their respective NCAs. Never submit consolidated figures.
  • Use the data quality dashboard to monitor template applicability, the presence of mandatory values, errors and warnings, and overall readiness for submission.
  • Treat errors as submission blockers and resolve them before sending. Warnings should be reviewed carefully; if retained, they signal that the reporting entity has verified the values despite the warning.
  • Report zeros when a data point is applicable and truly zero; leave cells blank when the data point is inapplicable to the entity or when the data exists but is not available. Cells left blank because data is unavailable must be accompanied by a comment starting with the string “unable to report” and a short explanation in the comments template.
  • Populate country-breakdown tables only for jurisdictions where the entity has relevant activity. To save time, copy the country list from the template’s lists sheet into an external file and paste back only the jurisdictions that apply.
  • Do not alter locked workbook structure or validations. If a validation produces false positives, notify the responsible NCA so AMLA can assess whether the validation logic requires correction.
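The zero/blank/“unable to report” convention above amounts to a small decision rule, sketched below. The function name, return labels and messages are illustrative assumptions; only the classification logic mirrors AMLA’s guidance, and the actual workbook enforces it through its own built-in validations.

```python
def check_cell(value, applicable, comment=None):
    """Classify one reported cell per the exercise's conventions.

    Returns "ok" or "error". Illustrative sketch: a zero is valid for an
    applicable data point; a blank is valid either when the data point is
    inapplicable or when a comment starting with "unable to report"
    explains that the data exists but is unavailable.
    """
    if value is not None:
        # A populated cell (including zero) is fine only where applicable.
        return "ok" if applicable else "error"
    if not applicable:
        return "ok"  # blank is the correct way to mark inapplicability
    if comment is not None and comment.startswith("unable to report"):
        return "ok"  # blank but explained in the comments template
    return "error"   # applicable, blank, and no explanation given

print(check_cell(0, applicable=True))                                    # ok
print(check_cell(None, applicable=True))                                 # error
print(check_cell(None, applicable=True,
                 comment="unable to report: legacy system migration"))   # ok
```

The key distinction the sketch captures is that a blank cell is ambiguous on its own: it is valid only when paired with either inapplicability or an explicit “unable to report” comment.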

AMLA clarified how to apply definitions for this exercise given the legislative timing mismatch: many of the AMLA/AMLD definitions referenced in the interpretative note will only formally apply from July 2027. For the 2025 reference year reporting, entities should use definitions and legal standards applicable during that period (e.g., previous AMLD transpositions) where direct conflict exists. Where possible, report according to the interpretative note derived from the eventual AMLA definitions, but prioritize compliance with binding law in the reference period. AMLA emphasized that the interpretative note and reporting package for the 2025 calibration are final for this round but will be updated for the 2027 exercise taking into account lessons learned.

Language support, communications and supervisory assistance

The reporting package will be published in English as the authoritative version. To assist entities that need it, AMLA is preparing machine translations into all EU languages, which NCAs may circulate for convenience but which will not be legally authoritative. AMLA will not engage directly with individual reporting entities at scale; NCAs remain the primary contact points for upload, validation, and queries. AMLA will collect queries via a centralized submission link and publish FAQs and clarifications where relevant. Targeted bilateral follow-up via supervisory channels may occur for data clarification or to request resubmissions, but this will be limited and focused on improving data usability for methodological purposes.

What happens after submission

Once the cut-off for resubmissions is reached in May, AMLA and NCAs will proceed to the analytical phase. Submitted data will be assessed for completeness, internal consistency and usability. Methodological testing will validate whether indicators, definitions and reporting logic operate as intended in practice, and calibration will refine weights, thresholds and scoring rules in both the selection and the rescoring models. Follow-ups for clarification or targeted data corrections may be requested via NCAs. AMLA does not plan to publish entity-specific scores or benchmarking outputs from this calibration exercise; any follow-up will be methodological and aimed at improving data quality and the robustness of the EU-wide framework.

Implications for obliged entities and supervisors

This calibration exercise is an important milestone toward a harmonized EU ML/TF risk assessment and selection framework. For obliged entities, it is an early signal that future supervisory practice at EU level will be more data-driven, standardized and sector-aware. Quality, completeness and timeliness of reporting are crucial to ensure the methodology is fit for purpose and proportionate in its future application. For national supervisors, the exercise provides both assurance that AMLA’s approach will be tested on representative data and an opportunity to flag sectoral or national particularities that may require differentiated treatment in the final methodology.

Concluding note

AMLA stressed the collaborative nature of this work – the authority needs high-quality inputs from both NCAs and the sampled entities to build an operational and comparable EU-wide framework that supports targeted, proportionate supervision. The calibration phase is the bridge between design and operational reality; the robustness of the final risk assessment and selection models will depend on careful preparation, thorough internal checks by reporting entities, timely validations by NCAs, and constructive feedback loops during the analytical stage.

Acknowledgement

This summary draws on the presentation delivered by the Anti-Money Laundering Authority (AMLA).

Talk copyright holder(s): Anti-Money Laundering Authority (AMLA)
The information in this article is of a general nature and is provided for informational purposes only. If you need legal advice for your individual situation, you should seek the advice of a qualified lawyer.
Bastian Schwind-Wagner is a recognized expert in anti-money laundering (AML), countering the financing of terrorism (CFT), compliance, data protection, risk management, and whistleblowing. He has worked for fund management companies for more than 24 years, where he has held senior positions in these areas.