EU AI Act risks failure without strong enforcement capacity

CO-EDP, VisionRI | Updated: 22-12-2025 09:49 IST | Created: 22-12-2025 09:49 IST
Representative Image. Credit: ChatGPT

European policymakers face a narrowing window to bring artificial intelligence under democratic control as the technology accelerates faster than regulatory systems can adapt. From generative models embedded in public services to AI-driven decision-making in finance, policing, and healthcare, the pace of deployment is stretching the limits of existing governance frameworks. While the European Union has positioned itself as a global leader in AI regulation, concern is growing over whether laws alone can keep pace with rapid technological change.

That concern is at the center of the study Governing Rapid Technological Change: Policy Delphi on the Future of European AI Governance, a 2024 research paper examining expert assessments of the EU’s readiness to govern artificial intelligence. The study draws on a structured Policy Delphi involving experts from policymaking institutions, academia, civil society, think tanks, and industry to evaluate the strengths and weaknesses of current and future European AI governance.

Why enforcement, not legislation, is the core challenge

According to the study, the future of AI governance in Europe depends far more on enforcement capacity than on legislative design. While the EU AI Act has been widely recognized as a landmark regulatory framework, experts involved in the research expressed skepticism that its ambitions can be realized without sustained institutional strength, resources, and political backing.

Participants broadly agreed that risk-based regulation, technology-neutral legal language, and harmonized EU-wide rules are desirable features of AI governance. However, these attributes were not seen as sufficient on their own. The real vulnerability lies in the ability of institutions to interpret, update, and enforce rules as AI systems evolve. Experts repeatedly pointed to the risk that regulation could become outdated or symbolic if enforcement bodies lack the authority or expertise to respond to new use cases.

The study highlights the European AI Office and national supervisory authorities as critical pressure points. These institutions are expected to oversee compliance, coordinate enforcement, and respond to emerging risks. Yet experts questioned whether they will receive adequate funding, staffing, and technical expertise to perform these roles effectively over time. Without these capabilities, even well-designed regulation may struggle to influence real-world AI development and deployment.

Another concern raised is regulatory fragmentation at the implementation level. While EU-level rules aim to ensure consistency, enforcement will still rely heavily on national authorities. Experts warned that uneven capacity across member states could lead to inconsistent application of AI rules, creating loopholes and regulatory arbitrage. This risk is particularly acute given the cross-border nature of AI systems and the dominance of large multinational technology firms.

The study also finds that adaptability is often misunderstood in regulatory debates. Flexibility clauses, regulatory sandboxes, and technology-neutral wording are frequently promoted as solutions to rapid change. Yet experts ranked these tools as less important than continuous institutional learning and active rule revision. Adaptability, in this view, is not built into legal text alone but emerges from governance practice over time.

The growing gap between what is needed and what is likely

The study identifies a clear gap between what experts believe is necessary for effective AI governance and what they consider politically and institutionally likely to happen. This desirability–probability gap cuts across multiple dimensions of AI regulation.
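To make the construct concrete, the minimal sketch below shows one way a desirability–probability gap could be computed from Delphi-style expert ratings. The statements, the five-point scale, and all scores are hypothetical illustrations, not data reported in the study.

```python
# Hypothetical illustration of a desirability-probability gap calculation.
# The statements, the 1-5 scale, and every score below are invented for
# demonstration; they are not ratings from the study itself.
from statistics import mean

ratings = {
    "international coordination": {"desirability": [5, 5, 4, 5, 4],
                                   "probability":  [2, 1, 2, 3, 2]},
    "public participation":       {"desirability": [4, 5, 4, 4, 5],
                                   "probability":  [2, 2, 3, 2, 2]},
    "industry self-regulation":   {"desirability": [2, 1, 2, 2, 1],
                                   "probability":  [4, 4, 3, 4, 4]},
}

for statement, scores in ratings.items():
    # Gap = mean desirability minus mean probability across the expert panel.
    gap = mean(scores["desirability"]) - mean(scores["probability"])
    print(f"{statement}: gap = {gap:+.1f}")
```

Read this way, a large positive gap flags outcomes experts want but do not expect, while a negative gap (as in the hypothetical self-regulation row) flags outcomes experts expect but do not want.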

International coordination on AI governance was rated as highly desirable but unlikely to materialize at the scale required. Experts pointed to geopolitical competition, divergent regulatory cultures, and strategic rivalry between major powers as barriers to meaningful global alignment. As AI development becomes increasingly tied to economic competitiveness and national security, cooperation is expected to remain limited.

Similarly, strong mechanisms for public participation and democratic oversight of AI systems were widely supported in principle but seen as difficult to implement in practice. Experts cited time constraints, technical complexity, and limited public awareness as obstacles to meaningful citizen involvement in AI governance. This raises concerns about legitimacy, particularly as AI systems increasingly affect fundamental rights and access to public services.

The study also addresses the role of industry self-regulation, which experts overwhelmingly rejected as an adequate solution. Voluntary standards, codes of conduct, and ethics guidelines were viewed as insufficient to address systemic risks or power imbalances in the AI ecosystem. While such measures may complement formal regulation, they were not seen as substitutes for binding rules and credible enforcement.

This gap between ambition and feasibility reflects deeper structural constraints within democratic governance. Legislative processes are inherently slow, consensus-driven, and subject to political compromise. AI development, by contrast, is fast, iterative, and driven by private sector incentives. The study suggests that unless governance systems are explicitly designed to manage this mismatch, regulation will struggle to remain relevant.

Power concentration and the limits of self-regulation

The study expresses concern over the concentration of power within the AI ecosystem. Experts identified the dominance of a small number of large technology companies, particularly in computing infrastructure, data access, and foundation models, as one of the most pressing governance challenges facing Europe.

This concentration limits competition, constrains regulatory leverage, and increases dependence on non-European providers. Experts expressed strong support for policies aimed at reducing this imbalance, including stricter antitrust enforcement, investment in public digital infrastructure, and support for open and interoperable AI systems.

Governance challenges extend beyond individual AI applications to the structural level of the digital economy. Control over cloud infrastructure, high-performance computing, and large-scale data resources shapes who can develop advanced AI and under what conditions. Without addressing these upstream dependencies, downstream regulation may have limited impact.

Experts also warned against overreliance on market-driven solutions to governance problems. While innovation incentives are important, unchecked market concentration risks undermining democratic accountability and public interest objectives. The study argues that public investment and coordinated industrial policy are necessary complements to regulation, particularly if Europe aims to maintain technological sovereignty.

Another critical issue raised is the risk of regulatory capture. As AI systems become more complex, regulators may become increasingly reliant on industry expertise to understand and assess risks. Without safeguards, this dependence could weaken oversight and tilt governance in favor of dominant actors. Strengthening public sector expertise in AI was therefore identified as a priority for future governance.

The study places particular emphasis on the role of enforcement culture. Effective governance requires not only legal authority but also willingness to act, even when enforcement may challenge powerful economic interests. Experts stressed that political commitment to enforcement must be sustained over time, beyond initial regulatory announcements.

The research delivers a sobering message for European AI policy. Regulation alone will not future-proof AI governance. Laws such as the EU AI Act represent an important foundation, but their impact will depend on the less visible work of building institutions, developing expertise, and maintaining political will. Without these elements, governance risks falling behind technological reality.

At the same time, the study does not suggest that effective AI governance is unattainable. Instead, it reframes the challenge as one of long-term capacity building rather than one-off legislative success. By focusing on enforcement, institutional learning, and power dynamics within the AI ecosystem, the research offers a clearer picture of what it will take for Europe to govern artificial intelligence in a way that is both democratic and resilient.

First published in: Devdiscourse