Regulatory gaps emerge as EU banks deploy AI under conflicting AML and AI rules
European financial institutions are accelerating their use of artificial intelligence to detect money laundering, terrorist financing, and complex fraud networks, driven by regulatory pressure, rising transaction volumes, and increasingly sophisticated criminal techniques. Machine learning systems now screen customers, flag suspicious transactions, and support Financial Intelligence Units across the bloc. However, Europe’s regulatory framework is struggling to keep pace with the legal and ethical consequences of automated compliance.
A study by Brewczyńska, titled Combatting Financial Crime with AI at the Crossroads of the Revised EU AML/CFT Regime and the AI Act and published in the New Journal of European Criminal Law, examines how the European Union’s revamped anti-money laundering and counter-terrorism financing framework intersects with the EU AI Act, revealing gaps, overlaps, and unresolved conflicts that could shape the future of AI-driven financial regulation.
AI becomes central to Europe’s financial crime controls
Financial institutions increasingly rely on algorithmic tools to meet regulatory expectations for continuous monitoring, risk-based customer due diligence, and rapid detection of suspicious behavior. Traditional rule-based systems generate high false-positive rates and scale poorly with transaction volumes, while AI promises adaptive pattern recognition across large datasets.
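To make that contrast concrete, the minimal Python sketch below compares a fixed-threshold rule with a simple statistical check that adapts to a customer's own transaction history. The data, threshold, and z-score cut-off are all invented for illustration; production monitoring systems use far richer features and trained models.

```python
from statistics import mean, stdev

# Hypothetical transaction history for one customer (illustrative data only).
history = [120.0, 80.0, 95.0, 110.0, 105.0, 90.0, 130.0]
incoming = 5_000.0

# Rule-based check: a fixed threshold flags everything above it,
# regardless of the customer's own behaviour.
RULE_THRESHOLD = 1_000.0
rule_flag = incoming > RULE_THRESHOLD

# Adaptive check: score the transaction against the customer's own
# historical pattern (a simple z-score stands in for a learned model).
z = (incoming - mean(history)) / stdev(history)
adaptive_flag = abs(z) > 3.0  # flag deviations beyond three standard deviations

print(f"rule-based: {rule_flag}, adaptive (z={z:.1f}): {adaptive_flag}")
```

The fixed rule fires on every large payment, driving the false-positive problem the article describes, while the adaptive check only fires when a transaction departs from the customer's established pattern.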
Under the revised EU AML/CFT package, institutions are expected to adopt more sophisticated, proactive approaches to identifying financial crime. AI fits neatly into this expectation, even though the legislation rarely names it explicitly. The result, Brewczyńska argues, is a regulatory environment that implicitly encourages AI deployment without clearly defining how algorithmic systems should be governed in practice.
This implicit encouragement creates a structural dependency. Banks and other obliged entities face pressure to modernize compliance operations to avoid enforcement actions and reputational damage. AI systems offer speed and scalability, but their growing role shifts decision-making power away from human compliance officers and toward opaque technical processes. That shift raises legal questions that neither the AML/CFT framework nor the AI Act fully resolves on its own.
The paper highlights that AI-driven tools are now deeply embedded in core compliance functions. These include transaction monitoring systems that flag unusual patterns, customer risk scoring models that influence onboarding decisions, and automated alerts that shape reporting to national authorities. In many cases, these systems directly affect individuals’ access to financial services, making them consequential from a fundamental rights perspective.
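A heavily simplified sketch of how such a risk scoring model might gate an onboarding decision appears below. The factors, weights, and cut-off are all invented for illustration; real models draw on many more inputs and are typically learned rather than hand-weighted.

```python
from dataclasses import dataclass

@dataclass
class CustomerProfile:
    # Illustrative risk factors; real due-diligence models use far more.
    high_risk_jurisdiction: bool
    politically_exposed: bool
    cash_intensive_business: bool
    expected_monthly_volume: float  # in EUR

def risk_score(c: CustomerProfile) -> float:
    """Return a 0-1 risk score from hand-picked weights (illustrative only)."""
    score = 0.0
    score += 0.4 if c.high_risk_jurisdiction else 0.0
    score += 0.3 if c.politically_exposed else 0.0
    score += 0.2 if c.cash_intensive_business else 0.0
    score += min(c.expected_monthly_volume / 1_000_000, 1.0) * 0.1
    return score

# A score above a cut-off blocks or escalates onboarding: this is the point
# at which the model directly affects access to financial services.
applicant = CustomerProfile(True, False, True, 250_000)
decision = "escalate to manual review" if risk_score(applicant) >= 0.5 else "proceed"
print(decision)
```

Even in this toy form, the fundamental-rights concern is visible: the applicant never sees the weights or the cut-off that determined the outcome.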
Where the AML/CFT framework and the AI Act collide
The revised AML/CFT framework aims to strengthen financial crime prevention through harmonization and centralized oversight, while the AI Act introduces a horizontal, risk-based governance system for artificial intelligence across sectors. Although both frameworks share a commitment to protecting fundamental rights, they approach AI from different starting points.
The AI Act classifies certain AI systems as high-risk, triggering strict obligations related to transparency, human oversight, data governance, and accountability. Financial crime detection tools might appear to fall squarely within this category. However, the study shows that exemptions, carve-outs, and ambiguous definitions complicate that assumption.
Some AI systems used in AML/CFT contexts may avoid high-risk classification under the AI Act, particularly when framed as fraud detection or used by public authorities such as Financial Intelligence Units. This creates a regulatory gray zone where powerful AI tools operate with fewer safeguards than expected. Brewczyńska argues that this outcome undermines the AI Act’s human-centric goals.
The AML/CFT framework, meanwhile, emphasizes effectiveness and rapid response but offers limited guidance on what constitutes meaningful human oversight when AI systems are involved. Requirements for human intervention are often framed broadly, leaving room for interpretation. In practice, this can result in nominal oversight that does little to counterbalance automated decision-making.
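Purely as an illustration of that distinction, and not a reading of either regulation's requirements, the sketch below contrasts a rubber-stamp review step with one that blocks any action until a reviewer records an independent rationale.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Alert:
    alert_id: str
    model_score: float
    reviewer_rationale: Optional[str] = None

def nominal_review(alert: Alert) -> bool:
    # The human step exists on paper but simply echoes the model's output.
    return alert.model_score > 0.8

def meaningful_review(alert: Alert) -> bool:
    # The alert cannot be actioned until a reviewer records an independent
    # rationale, creating an auditable, contestable human judgment.
    if not alert.reviewer_rationale:
        raise ValueError("human rationale required before any action")
    return alert.model_score > 0.8
```

The difference is procedural rather than algorithmic: both functions apply the same score, but only the second produces a record a flagged individual could later challenge.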
The paper identifies a deeper structural problem: the two regimes were developed largely in parallel, with limited coordination. As a result, key concepts such as explainability, accountability, and contestability are treated differently across the frameworks. Financial institutions attempting to comply with both face uncertainty about which standards apply and how conflicts should be resolved.
This uncertainty has real consequences. Individuals flagged by AI-driven AML systems may face account freezes, transaction delays, or enhanced scrutiny without clear explanations or effective avenues for redress. The study warns that such outcomes risk eroding trust in financial institutions and the broader regulatory system.
Fundamental rights, accountability, and the future of AI in AML
When an AI system flags a transaction or categorizes a customer as high risk, responsibility is dispersed across developers, deployers, and regulators. The author argues that current EU frameworks do not fully address this redistribution of responsibility. The AML/CFT regime focuses on institutional obligations, while the AI Act emphasizes system-level risk management. Neither fully resolves who is accountable when AI-driven decisions cause harm or infringe rights.
The study also highlights the risk of normalization. As AI systems become standard tools in compliance, their outputs may be treated as objective or authoritative, reducing critical scrutiny by human operators. This dynamic can weaken safeguards, especially when institutions prioritize efficiency and regulatory compliance over individual fairness.
The paper calls attention to the need for clearer alignment between AML/CFT rules and AI governance. This includes consistent definitions of automated decision-making, stronger requirements for explainability in high-impact contexts, and clearer standards for human oversight that go beyond formal review processes.
FIRST PUBLISHED IN: Devdiscourse

