AI bias cannot be fixed by regulation alone: Here's why


CO-EDP, VisionRI | Updated: 30-03-2026 07:17 IST | Created: 30-03-2026 07:17 IST

Artificial intelligence (AI) systems deployed across critical sectors such as hiring, finance, healthcare, and security continue to produce biased outcomes despite operating within existing regulatory frameworks, raising fresh concerns about the effectiveness of current governance models. A new study finds that bias in AI is not an isolated technical flaw but a systemic issue embedded across the entire lifecycle of these systems.

Published in Information, the study titled “Systemic Data Bias in Real-World AI Systems: Technical Failures, Legal Gaps, and the Limits of the EU AI Act” presents an in-depth cross-sectoral analysis of how bias originates, evolves, and persists in real-world AI deployments. The research critically evaluates the European Union’s AI Act, concluding that its risk-based regulatory framework fails to address the dynamic and cumulative nature of bias in complex socio-technical systems.

Bias emerges early and spreads across the AI lifecycle

Bias in AI systems begins long before deployment and cannot be understood as a single-stage failure. Instead, it originates at the earliest stages of the AI lifecycle, particularly during data collection and annotation, where historical inequalities and representational distortions are embedded into training datasets.

These initial biases are then carried forward into model development, where algorithmic design choices can amplify existing distortions. As systems move into evaluation and deployment, these biases become operationalized, influencing real-world decisions in ways that may reinforce existing disparities.
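
To make that propagation concrete, consider a minimal Python sketch (our illustration, not code from the study) of the kind of audit that would surface such distortions before training: the dataset's group composition and its historical outcome rates. The column names and numbers are hypothetical.

```python
import pandas as pd

# Hypothetical hiring dataset; column names and counts are illustrative.
df = pd.DataFrame({
    "group": ["A"] * 800 + ["B"] * 200,
    "hired": [1] * 240 + [0] * 560 + [1] * 30 + [0] * 170,
})

# Representation: share of each group in the training data.
representation = df["group"].value_counts(normalize=True)

# Historical label skew: positive-outcome rate per group. A large gap here
# is inherited by any model trained to reproduce these labels.
positive_rate = df.groupby("group")["hired"].mean()

print(representation)  # A: 0.80, B: 0.20 -> group B is under-represented
print(positive_rate)   # A: 0.30, B: 0.15 -> historical outcome gap
```

A model fitted to this data has never been told to discriminate; it simply learns the under-representation and the outcome gap as facts about the world.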

The research highlights that this lifecycle-driven propagation of bias is often invisible within current governance frameworks. By the time biased outcomes appear in deployment, the underlying causes are deeply embedded and difficult to trace back to their origin.

Across sectors, the study identifies recurring patterns. In employment systems, bias emerges from historical hiring data that reflects entrenched social inequalities. In credit scoring, financial histories and proxy variables can encode socio-economic disparities. In healthcare, data imbalances can affect diagnostic accuracy across different patient groups. In biometric systems, demographic biases in datasets can lead to unequal performance. In autonomous systems, training data and environmental assumptions can introduce systemic errors.
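
The proxy-variable problem in credit scoring is worth unpacking, since it explains why simply deleting a protected attribute does not remove bias. The sketch below (illustrative; the variable names and rates are invented, not drawn from the study) shows how a feature such as a neighborhood code can stand in for group membership almost perfectly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical credit data: the protected attribute is never given to the
# model, but a "neighborhood" proxy is strongly aligned with it.
n = 10_000
protected = rng.integers(0, 2, n)              # 0/1 group membership
neighborhood = np.where(rng.random(n) < 0.9,   # 90% aligned with the group
                        protected, 1 - protected)

agreement = (neighborhood == protected).mean()
print(f"proxy matches the protected attribute {agreement:.0%} of the time")

# A model using `neighborhood` can therefore reproduce group disparities
# even though the protected attribute itself was "removed" from the data.
```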

These patterns reveal that bias is not confined to specific use cases but is a structural feature of data-driven systems. The study emphasizes that treating bias as a localized technical issue fails to capture its broader socio-technical dimensions.

EU AI Act falls short in addressing structural bias

While the EU AI Act is widely regarded as one of the most comprehensive attempts to regulate artificial intelligence, the study finds that its framework is limited in addressing systemic bias.

The Act adopts a risk-based classification system, categorizing AI applications based on their potential impact. High-risk systems are subject to stricter requirements, including documentation, transparency, and compliance obligations. However, the study argues that this approach treats risk as a static attribute rather than a dynamic process that evolves throughout the lifecycle of an AI system.

This static perspective creates significant blind spots. Bias does not remain confined to predefined categories but can emerge, shift, and intensify over time. As a result, systems that meet regulatory requirements at one stage may still produce biased outcomes later in their lifecycle.

The study identifies several key governance gaps within the EU AI Act. One major issue is the limited auditability of datasets. While the Act requires documentation, it does not mandate comprehensive auditing of training data, leaving critical sources of bias insufficiently examined. Another gap lies in the absence of standardized fairness metrics. Without clear benchmarks, organizations have flexibility in defining and measuring fairness, leading to inconsistent practices and outcomes.
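
The consequence of missing standardized metrics is easy to demonstrate: common fairness definitions can disagree on the same predictions, so two organizations can each claim compliance while measuring different things. The following sketch (invented numbers, not data from the study) computes two widely used definitions, demographic parity and equal opportunity, on one set of decisions:

```python
import numpy as np

# Illustrative labels and predictions for two groups of eight applicants.
y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0,  1, 1, 1, 1, 0, 0, 0, 0])
y_pred = np.array([1, 1, 1, 1, 0, 0, 0, 0,  1, 1, 0, 0, 1, 1, 0, 0])
group  = np.array(["A"] * 8 + ["B"] * 8)

def selection_rate(g):
    return y_pred[group == g].mean()

def tpr(g):  # true-positive rate among genuinely qualified applicants
    mask = (group == g) & (y_true == 1)
    return y_pred[mask].mean()

# Demographic parity: compare overall selection rates.
print("selection-rate gap:", abs(selection_rate("A") - selection_rate("B")))

# Equal opportunity: compare true-positive rates.
print("TPR gap:", abs(tpr("A") - tpr("B")))
```

By the demographic-parity lens the system looks fair (a 0.0 gap in selection rates); by the equal-opportunity lens it is badly skewed (a 0.5 gap in true-positive rates). The Act currently leaves the choice of lens to the organization.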

Post-deployment monitoring is also identified as a weak point. Once systems are deployed, there is limited oversight to track how they perform in real-world conditions, allowing biases to persist or even worsen over time. Accountability remains fragmented, with unclear divisions of responsibility between developers, deployers, and users. This fragmentation complicates efforts to identify and address bias when it arises.
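
What continuous oversight could look like in practice is straightforward to sketch. The monitor below (a minimal illustration with assumed group labels and an arbitrarily chosen tolerance, not a mechanism the Act prescribes) tracks rolling approval rates per group and flags drift:

```python
from collections import deque

# Minimal post-deployment bias monitor (illustrative): keep a rolling
# window of decisions per group and flag drift in approval-rate gaps.
class BiasMonitor:
    def __init__(self, groups=("A", "B"), window=1000, tolerance=0.10):
        self.windows = {g: deque(maxlen=window) for g in groups}
        self.tolerance = tolerance

    def record(self, group, approved):
        self.windows[group].append(int(approved))

    def check(self):
        rates = {g: sum(w) / len(w) for g, w in self.windows.items() if w}
        if len(rates) >= 2:
            gap = max(rates.values()) - min(rates.values())
            if gap > self.tolerance:
                print(f"ALERT: approval-rate gap {gap:.2f} exceeds tolerance")
        return rates

monitor = BiasMonitor()
monitor.record("A", approved=True)   # call on every live decision
monitor.record("B", approved=False)
monitor.check()                      # run periodically, e.g. hourly
```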

The study also highlights issues of transparency, noting that compliance requirements often focus on documentation rather than meaningful insight into system behavior. As a result, systems may appear compliant without being genuinely accountable.

Socio-technical dynamics reinforce bias beyond compliance

The study focuses on the interaction between technical mechanisms, human behavior, and institutional structures in shaping AI outcomes. Bias is not produced solely by algorithms or data. It is reinforced through socio-technical processes, including human reliance on automated systems, organizational practices, and feedback loops that stabilize biased patterns over time.

For example, when decision-makers rely heavily on AI outputs, they may inadvertently reinforce biased recommendations, embedding them into organizational routines. Over time, these patterns can become institutionalized, making bias more difficult to detect and correct.

The study describes how technical bias mechanisms, socio-technical amplification, and regulatory gaps interact to produce persistent and systemic bias. These overlapping dynamics create feedback-driven effects, where biased outputs influence future data collection and model training, perpetuating the cycle.
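
A toy simulation makes the feedback-driven effect visible. In the sketch below (our construction, with invented numbers), the system only observes outcomes for the cases it approves; a group that starts slightly underestimated is never approved, generates no corrective data, and its estimate stays frozen indefinitely:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy feedback loop (illustrative): outcomes are observed only for
# approved cases, so an initial gap in estimated quality locks in.
true_rate = {"A": 0.6, "B": 0.6}   # both groups are equally good
estimate  = {"A": 0.6, "B": 0.5}   # B starts slightly underestimated

for step in range(10):
    for g in ("A", "B"):
        if estimate[g] >= 0.55:    # approve only if the estimate is high
            outcomes = rng.random(100) < true_rate[g]
            # new outcome data nudges the estimate toward the truth
            estimate[g] = 0.9 * estimate[g] + 0.1 * outcomes.mean()
        # rejected groups produce no outcome data: estimate never updates

print(estimate)  # A converges to ~0.6; B stays stuck at 0.5
```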

This perspective challenges the notion that compliance with regulations is sufficient to ensure fairness. Even systems that meet formal requirements can produce discriminatory outcomes if underlying socio-technical dynamics are not addressed.

The research argues that effective governance must account for these interactions, moving beyond isolated technical fixes or legal constraints. Addressing bias requires a holistic approach that considers the entire lifecycle of AI systems and the contexts in which they operate.

Rethinking AI governance beyond static regulation

The authors argue that current governance models are fundamentally misaligned with the realities of AI systems. By treating bias as a technical defect and risk as a static classification, existing frameworks fail to capture the dynamic and interconnected nature of AI-driven decision-making. To address these limitations, the authors propose a shift toward lifecycle-oriented governance, an approach that emphasizes continuous monitoring, integration of fairness considerations into system design, and alignment between technical development and legal accountability.

Rather than applying regulation as an external constraint, the study advocates for embedding governance mechanisms directly into the development process. This includes integrating bias detection and mitigation strategies at every stage of the lifecycle, from data collection to deployment and beyond.
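
One way to read that recommendation in engineering terms (our interpretation, not a mechanism specified by the study or the Act) is as a fairness gate wired into the build and release pipeline, run after training and again on sampled production decisions:

```python
import sys
import numpy as np

# A fairness gate as a pipeline step (illustrative): block a release if
# the selection-rate gap between groups exceeds a chosen threshold.
def fairness_gate(y_pred, group, max_gap=0.05):
    rates = {g: y_pred[group == g].mean() for g in np.unique(group)}
    gap = max(rates.values()) - min(rates.values())
    passed = gap <= max_gap
    print(f"fairness gate {'passed' if passed else 'FAILED'}: gap {gap:.3f}")
    return passed

# Run in CI after training, and again nightly on sampled live decisions.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
if not fairness_gate(y_pred, group):
    sys.exit(1)  # fail the job so the biased model cannot ship
```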

The research also calls for stronger coordination between technical and legal domains. Bridging this gap can help ensure that regulatory frameworks are informed by an accurate understanding of how AI systems function in practice. For policymakers, the findings highlight the need to move beyond compliance-based approaches toward more adaptive and context-aware governance models. For organizations, they underscore the importance of addressing bias as a systemic issue rather than a one-time problem.

Implications for high-stakes sectors

The cross-sectoral analysis underscores the urgency of addressing bias in AI systems used in high-stakes environments. In employment, biased hiring algorithms can reinforce existing inequalities in labor markets. In finance, discriminatory credit scoring can limit access to economic opportunities. In healthcare, biased predictive models can affect patient outcomes and exacerbate disparities. In biometric systems, unequal performance can lead to misidentification and security risks. In autonomous systems, biased decision-making can have safety implications.

These risks highlight the broader societal impact of systemic bias in AI. As these technologies become more deeply integrated into everyday life, their influence on economic, social, and institutional outcomes will continue to grow. The study warns that without significant changes to governance approaches, these biases may become entrenched, creating long-term challenges for equity and accountability.

FIRST PUBLISHED IN: Devdiscourse