Balancing AI and human judgment: Addressing automation bias in high-risk systems
Artificial Intelligence (AI) is transforming industries and making decision-making more efficient, yet it also introduces risks, particularly through automation bias (AB). Automation bias occurs when humans rely too heavily on AI-generated outputs, accepting them without critical evaluation. This issue is explicitly acknowledged in the European Union’s Artificial Intelligence Act (AIA), which mandates human oversight for high-risk AI systems.
A recent study titled "Automation Bias in the AI Act: On the Legal Implications of Attempting to De-Bias Human Oversight of AI" by Johann Laux and Hannah Ruschemeier, published as part of the "Emerging Laws of Oversight" project, critically examines how automation bias is addressed within the AIA and explores its legal enforceability.
Role of automation bias in AI regulation
The AIA takes a risk-based approach to AI regulation, classifying AI systems according to their potential societal and ethical risks. High-risk AI systems, which include those used in recruitment, healthcare, and legal decision-making, must have human oversight to ensure accountability. However, human oversight itself is prone to automation bias, as individuals tend to place undue trust in automated decisions, assuming that AI-generated results are inherently more reliable than human judgment. The AIA attempts to mitigate this by requiring AI providers to design their systems so that human overseers remain aware of AB and stay vigilant. Nevertheless, the study argues that merely mandating awareness does not sufficiently address the complex psychological and systemic factors contributing to AB.
The study highlights a fundamental issue in the legal framework: the division of responsibility between AI providers and deployers. Under the AIA, AI providers must design systems that allow human oversight to be aware of automation bias. However, the deployers - those who use these AI systems in practice - are ultimately responsible for implementing effective human oversight. The problem arises because automation bias is influenced not only by system design but also by organizational culture, training, workload, and situational context. While providers can introduce safeguards, deployers play a crucial role in shaping how AI decisions are interpreted and acted upon. This legal gap creates uncertainty about whether responsibility for preventing automation bias should lie more with those who create AI or those who use it.
The challenge of legal enforcement
Enforcing the AIA’s mandate to prevent automation bias presents another challenge. Awareness of automation bias is a subjective state, making it difficult to measure and legally verify compliance. How can regulators determine whether an AI user is sufficiently aware of automation bias? The study argues that proving the occurrence of automation bias in a legal context is problematic, as bias is often an unconscious cognitive process.
Unlike discrimination laws, where statistical analysis can demonstrate bias in decision-making patterns, automation bias operates on an individual level and is challenging to quantify. The legal system would need to rely on expert testimony, psychological assessments, and extensive record-keeping to establish whether AB has influenced a decision. This complexity raises concerns about the practicality of enforcing AB-related regulations under the AIA.
Future directions: Strengthening AI oversight standards
To address these challenges, the study suggests that harmonized AI oversight standards should explicitly incorporate empirical research on automation bias. Rather than relying solely on providers to embed AB awareness mechanisms, regulators should consider broader structural interventions. This includes mandatory training for AI deployers, standardized evaluation protocols for AI-assisted decisions, and greater scrutiny of how AI-generated outputs influence human decision-making. Additionally, integrating insights from behavioral psychology into AI regulation can help create more effective safeguards. The relationship between the AIA and the General Data Protection Regulation (GDPR) also warrants further exploration, particularly in ensuring that human oversight mechanisms align with broader data protection and ethical AI principles.
As AI continues to shape critical decision-making processes, addressing automation bias remains a pressing challenge. While the AIA represents a significant step in AI governance, the study underscores the need for a more nuanced regulatory approach - one that not only mandates awareness but also implements tangible strategies to counteract the effects of automation bias in human oversight.
- FIRST PUBLISHED IN: Devdiscourse

