Robust yet fragmented: EU’s AI legal framework faces sector-specific challenges

CO-EDP, VisionRI | Updated: 08-05-2025 17:59 IST | Created: 08-05-2025 17:59 IST
Representative Image. Credit: ChatGPT

A new peer-reviewed study reveals that regulatory gaps and fragmented governance are complicating the deployment of AI technologies in the European Union's energy sector. Titled “Regulating AI in the Energy Sector: A Scoping Review of EU Laws, Challenges, and Global Perspectives” and published in Energies (2025), the study provides the first comprehensive scoping review of existing EU legislation, highlighting both the potential and the pitfalls of the EU's risk-based approach under the Artificial Intelligence Act (AI Act), along with related frameworks such as the General Data Protection Regulation (GDPR), the NIS2 Directive, and the Cyber Resilience Act.

What is the EU’s approach to AI governance in the energy sector?

The study finds that the EU’s approach is anchored in a risk-based regulatory philosophy outlined in the AI Act, which categorizes AI applications based on the level of risk they pose, ranging from minimal to unacceptable. In the energy sector, where applications span from smart grid optimization to predictive maintenance and cybersecurity, most fall into the “high-risk” category due to their potential impact on critical infrastructure and public welfare.
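To make that tiering concrete, the sketch below is a minimal, purely illustrative Python example, not drawn from the study or from the AI Act's own annexes. The use-case names and their tier assignments are hypothetical; under the Act itself, classification follows the annexed use-case lists (critical infrastructure applications are listed as high-risk) rather than a simple lookup table.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers described in the EU AI Act's risk-based framework."""
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"

# Hypothetical mapping of energy-sector AI use cases to risk tiers,
# for illustration only; the AI Act assigns tiers via annexed lists.
ENERGY_USE_CASE_TIERS = {
    "smart_grid_optimization": RiskTier.HIGH,       # critical infrastructure
    "predictive_maintenance": RiskTier.HIGH,
    "grid_cybersecurity_monitoring": RiskTier.HIGH,
    "household_energy_chatbot": RiskTier.LIMITED,   # transparency duties only
    "office_energy_dashboard": RiskTier.MINIMAL,
}

def obligations_for(use_case: str) -> str:
    """Summarize the obligations implied by a use case's assigned tier."""
    tier = ENERGY_USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)
    summaries = {
        RiskTier.UNACCEPTABLE: "prohibited practice",
        RiskTier.HIGH: "risk management, conformity assessment, logging, human oversight",
        RiskTier.LIMITED: "transparency obligations",
        RiskTier.MINIMAL: "no specific AI Act obligations",
    }
    return f"{use_case}: {tier.value} risk -> {summaries[tier]}"

if __name__ == "__main__":
    for case in ENERGY_USE_CASE_TIERS:
        print(obligations_for(case))
```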

This risk-tiered framework is complemented by horizontal and sector-specific laws. Horizontal laws include GDPR and the Cyber Resilience Act, while sectoral governance comes from agencies like ACER (Agency for the Cooperation of Energy Regulators) and ENISA (European Union Agency for Cybersecurity). These layers are designed to build resilience, protect privacy, and ensure data integrity across AI-driven energy systems.

What challenges are emerging from the current legal structure?

Despite its intent, the EU’s regulatory approach is being undermined by cross-regulatory fragmentation and inconsistent sectoral implementation, the study finds. One major issue is the lack of harmonized standards across energy subdomains (generation, transmission, distribution, consumption, and trading), each of which is governed by separate legal and regulatory bodies.

Furthermore, the coexistence of overlapping laws, such as GDPR and the Data Governance Act, introduces ambiguity around data usage, anonymization, and cybersecurity requirements. These conflicts often force energy actors to navigate “legal grey zones,” particularly when deploying AI tools that involve data sharing between Transmission System Operators (TSOs), Distribution System Operators (DSOs), and private vendors.

The study also identifies a deficit in regulatory agility. Emerging technologies like federated learning, edge AI, and digital twins lack tailored guidance, delaying their scale-up and adoption. This mismatch between regulatory clarity and technological innovation is slowing down pilot projects and deterring investment in AI-heavy energy platforms.
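Federated learning is a useful illustration of why such tailored guidance matters: operators can train a shared model while raw measurement data stays local, so only model parameter updates cross organizational boundaries, which is precisely the kind of data flow the overlapping GDPR and data-governance rules leave ambiguous. The sketch below is illustrative only; the operator names and synthetic data are hypothetical and not taken from the study.

```python
import numpy as np

# Illustrative federated-averaging (FedAvg) sketch between grid operators.
# Raw measurements never leave the local site; only weight updates are shared.

def local_update(weights: np.ndarray, X: np.ndarray, y: np.ndarray,
                 lr: float = 0.01, epochs: int = 5) -> np.ndarray:
    """One operator refines the shared linear model on its own private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # least-squares gradient
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
global_w = np.zeros(3)

# Each operator holds private, synthetic load data (hypothetical names).
operators = {
    "TSO_A": (rng.normal(size=(100, 3)), rng.normal(size=100)),
    "DSO_B": (rng.normal(size=(80, 3)), rng.normal(size=80)),
    "DSO_C": (rng.normal(size=(120, 3)), rng.normal(size=120)),
}

for _round in range(10):
    # Each party trains locally and returns only its updated weights.
    updates = [local_update(global_w, X, y) for X, y in operators.values()]
    sizes = [len(y) for _, y in operators.values()]
    # A coordinator aggregates the updates, weighted by local dataset size.
    global_w = np.average(updates, axis=0, weights=sizes)

print("aggregated model weights:", global_w)
```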

How does regulatory complexity affect public trust and innovation?

The authors argue that while stringent regulations are essential for safeguarding public interest, they must not come at the expense of innovation and public trust. The review concludes that well-structured governance can become an enabler rather than an obstacle when transparency, accountability, and explainability are prioritized.

Public trust hinges on how well regulators communicate the trade-offs between efficiency, privacy, and control. For example, consumers interacting with AI-enabled smart meters or demand-response systems often lack awareness of how their data is processed or decisions are made. Without explainable AI and participatory governance models, this opacity could breed resistance and skepticism.

To bridge this gap, the study calls for enhanced stakeholder collaboration, including industry actors, regulators, and civil society, to co-design governance strategies. It also recommends increased investment in regulatory sandboxes to test and iterate AI applications in controlled environments.

How do EU regulations compare globally?

The study draws a comparative lens to examine AI governance in the United States and China. While the EU takes a precautionary, rights-based approach, the U.S. leans towards a market-driven, sectoral model, relying on existing laws such as the Federal Trade Commission Act. China, on the other hand, imposes state-centric mandates via the Personal Information Protection Law (PIPL) and the Algorithmic Recommendation Guidelines, emphasizing national security and public order.

These contrasting models reflect differing values: human rights and transparency in the EU, market efficiency in the U.S., and centralized control in China. The authors argue that global convergence on AI norms in the energy sector remains unlikely in the short term, making interoperability and cross-border collaboration essential challenges for EU policymakers.

What are the implications and future directions?

The study posits that the EU’s regulatory regime, if made more coherent, can become a global benchmark for ethical, inclusive, and innovation-driven AI deployment in energy systems.

Key recommendations include:

  • Clarifying sector-specific obligations under the AI Act through delegated acts and technical standards.
  • Enhancing coordination between data protection authorities, energy regulators, and AI oversight bodies.
  • Investing in capacity-building for energy stakeholders to understand and implement compliance mechanisms.
  • Developing metrics for assessing algorithmic risk, transparency, and resilience within energy infrastructures.

FIRST PUBLISHED IN: Devdiscourse