09 January 2026
AI Use by Directors and Boards
Early insights into dealing with and preventing financial crime
Boards and individual directors are increasingly experimenting with AI tools to prepare for meetings, surface strategic insights and monitor organisational risks. While many of these applications can improve the speed and breadth of oversight, they also create new channels through which financial crime risks can emerge or be obscured. For financial crime practitioners and those advising boards, it is now essential to understand not only how management deploys AI in operations but also how directors and boards use AI in their governance role – and what that means for detection, prevention and legal accountability.
Before the board meeting – preparation, data flows and the risk of shadow analysis
Directors commonly use AI to summarise long board packs, draft questions for management and research sector developments. Closed, enterprise-grade AI systems trained on internal materials can bring value by surfacing institutional memory and highlighting historical precedents relevant to an agenda item. But where directors use public or unvetted generative AI, sensitive content can be exposed to third-party training processes or retained by vendors, creating confidentiality breaches and potential loss of privilege. From a financial crime perspective, this is a meaningful risk: board-level discussions may reference investigations, suspicious activity reports, sanctions exposure or remediation plans. Unauthorised dissemination of such content can compromise ongoing investigations, alert subjects of inquiry and create discoverability risks in litigation or regulator enquiries.
Equally important is the danger of over-reliance on AI-generated summaries. Regulators expect directors to exercise care and diligence; relying on a model’s summary without reading the underlying materials risks missing nuance in compliance reports or audit findings that indicate money laundering, bribery or fraud. Boards should therefore require clear protocols about which AI tools may be used with board materials, mandate human verification of AI outputs and ensure document retention and destruction policies cover AI inputs and outputs.
In the boardroom – recordings, real-time analysis and chilling effects on frank discussion
AI note-takers and agentic assistants promise efficient, searchable records and real-time analytics. For financial crime oversight this can be attractive: instantaneous flagging of anomalous patterns, on-demand access to prior deliberations about risk appetite, or speedier follow-ups to compliance queries. However, audio capture, automated transcripts and AI-produced minutes introduce several hazards. Transcripts can be inaccurate, may fail to capture context, and when retained by vendors can become discoverable evidence in enforcement proceedings. Moreover, the knowledge that discussions are being transcribed or analysed by AI can inhibit free and frank exchange, reducing the willingness of directors or witnesses to surface uncomfortable but material concerns about suspicious transactions or internal control weaknesses.
Boards must weigh any benefit of real-time AI analytics against the legal and cultural risks. Consent and surveillance laws vary by jurisdiction, and boards should obtain informed consent before recording. If AI tools are used live, they must be deployed in closed, enterprise-controlled environments with strict access controls, encryption and retention rules. Any summaries or minutes derived from AI should be treated as drafts requiring human review and sign-off, particularly where the content touches financial crime matters.
After the board meeting – post-event analysis, evaluations and investigative risks
AI tools for board evaluation and post-mortem analysis can identify patterns in time allocation, questioning and decision outcomes. When applied to governance reviews of financial crime oversight, AI can help detect whether compliance and risk matters receive sufficient attention and whether recommendations from audit, compliance or external investigators are being actioned. Yet these same tools can also surface sensitive identifiers or patterns that, if retained improperly, could jeopardise confidentiality or breach privacy laws. Boards using AI for evaluation should ensure that outputs are anonymised where appropriate, that the datasets used are limited to the minimum necessary and that legal privilege and investigative sensitivities are protected.
Strategic use cases and their financial crime implications
There are sound, low-risk use cases for boards relevant to financial crime:
- Institutional memory and historical precedent: internal, closed models can quickly retrieve prior board deliberations on remediation, enforcement settlements or sanctions screening and help shape consistent responses. The model must never be exposed to public training pipelines and access must be auditable.
- Audit committee analytics: machine learning models can systematically surface anomalies across financial reports and audit files that merit further investigation. These tools should complement, not replace, forensic analysis and must have explainability features so directors can demand the basis for flagged anomalies.
- Scenario planning and stress testing: generative AI can accelerate scenario design for sanctions exposure, correspondent banking risks or adverse media events that may reveal compliance blind spots. Outputs should be treated as prompts for deeper human-driven modelling rather than final risk assessments.
- Investor and stakeholder lens: persona-based models can help anticipate investor questions about anti-money laundering programmes or the governance of compliance failures. Boards should ensure that such models do not replace direct stakeholder engagement or legal scrutiny.
Key governance controls boards must adopt to reduce financial crime exposure
Boards should adopt a set of minimum controls before permitting any collective or individual director use of AI:
- Policy alignment and documentation: explicitly include board and director AI use in the organisation’s AI register and risk frameworks. Define acceptable tools and prohibit use of public, unvetted generative AI for any board material that contains sensitive compliance or investigative content.
- Role-based access and secure environments: provide directors with enterprise-grade, closed AI workspaces that restrict data flows, prevent vendor re-use of inputs for training, enforce strong identity controls and log all access and prompts for auditability.
- Retention, privilege and evidence management: extend document retention, archival and destruction policies to AI prompts, responses, transcripts and intermediate artefacts. Preserve legal privilege by prohibiting the use of AI to seek legal advice and by ensuring privileged materials never enter public AI systems.
- Human-in-the-loop and verification: mandate human review and sign-off for AI-generated minutes, summaries or recommendations. Require explainability for any AI that influences committee decisions about financial crime oversight.
- Training and minimum literacy: ensure directors have targeted training to understand AI limitations – including hallucination, bias and opacity – and can critically interrogate model outputs, especially when financial crime indicators are at stake.
- Incident response and escalation thresholds: set thresholds for escalation when AI flags potential financial crime indicators, with clear liaison points among the chair, company secretary, general counsel, head of compliance and internal audit.
Potential legal and regulatory consequences for directors
Directors’ duties of care and diligence and duties to act in the company’s best interests do not change because AI is the tool used. Regulators are increasingly clear that directors must still read and interrogate key materials. Relying on AI summaries in place of due diligence is unlikely to be viewed as adequate care. Moreover, improper handling of confidential compliance materials through unvetted AI could create secondary exposures: regulatory findings of inadequate controls, compromised investigations, and reputational harm or civil liability. Boards should therefore ensure that governance of director AI use aligns with their existing statutory and fiduciary obligations.
Evolving boardroom dynamics – preserving oversight without shadowing management
AI can amplify board effectiveness, but it also risks blurring the line between oversight and operational management if directors use AI to probe beyond strategic questions into detailed operational execution. Directors must avoid becoming de facto investigators or operational managers; where deeper inquiry is needed, issues should be routed to management, internal audit or external advisers. Chairs and company secretaries have an important gatekeeping role: setting norms, sequencing AI inputs so members contribute views before seeing AI outputs and preserving the board’s deliberative character.
Practical checklist for financial crime-focused boards (summary guidance)
Boards charged with overseeing financial crime risk should:
- Prohibit use of public generative AI for any board materials that include investigative, compliance or privileged information.
- Provide vetted, closed AI tools for directors when needed, with role-based access and vendor commitments that inputs will not be reused or retained beyond allowed purposes.
- Require human verification for all AI outputs used in decision-making and ensure explainability or provenance for any anomaly flagged in financial or compliance reports.
- Update retention, privilege and incident response policies to encompass AI inputs, transcripts and outputs.
- Train directors on AI limitations and how to interpret model outputs in a compliance context.
- Establish clear escalation paths when AI surfaces potential financial crime indicators, with protocols for preserving evidence and coordination with legal counsel and regulators.
Luxembourg-specific considerations
In Luxembourg, the use of AI by boards intersects directly with a highly supervised financial centre and a regulatory culture that places strong emphasis on documented governance, effective oversight and individual accountability. Banks, investment funds, management companies, PSFs and FinTechs supervised by the CSSF operate within an EU AML framework that already expects mature controls around data handling, outsourcing and decision-making. Board-level use of AI, even for preparatory or analytical purposes, is therefore not a neutral activity and must be considered part of the institution’s overall governance and risk control system.
From a supervisory perspective, the CSSF has consistently focused on the effectiveness of boards in overseeing AML/CFT/CPF risks, including their understanding of information flows, reliance on third parties and documentation of key decisions. In this context, the use of AI by directors raises questions the CSSF is likely to examine during on-site inspections or thematic reviews: how AI tools are governed, whether their use is formally approved, how confidentiality and professional secrecy are preserved, and whether reliance on AI affects the board’s ability to demonstrate informed and independent judgment. The CSSF’s broader expectations on outsourcing, ICT risk management and data protection are also relevant where AI tools involve external providers.
Practically, Luxembourg regulated entities should ensure that any AI tools made available to directors are covered by internal policies, mapped in governance documentation and aligned with existing AML/CFT/CPF and ICT risk frameworks. Evidence of human review, traceability of inputs and outputs, and clear retention rules will be important to demonstrate control. Where AI is used to support oversight of financial crime risks, boards should be able to show that it supplements, rather than replaces, established reporting, escalation and investigative processes.
These considerations can be integrated into existing governance, compliance and internal control arrangements without altering their underlying structure, provided responsibilities and documentation are clearly defined.
Conclusion – a cautious, governed approach that protects oversight
AI offers boards tangible tools to enhance the oversight of financial crime risks, from improved access to institutional memory to faster detection of anomalies. Yet these benefits come with new confidentiality, legal and cultural hazards.
The priority for boards should be to adopt a cautious, governed approach:
- recognise informal ‘shadow’ use and regulate it;
- provide secure, auditable tools when collective use is justified;
- preserve human judgment as the final arbiter; and
- ensure policies and training keep pace.
For financial crime prevention, that approach is essential to protect investigations, maintain privilege, and ensure directors meet their duties while benefiting from AI’s capabilities.