Overriding or Overrelying? The hidden risks of AI-assisted decision-making

CO-EDP, VisionRI | Updated: 12-02-2025 17:07 IST | Created: 12-02-2025 17:07 IST

In an era where artificial intelligence (AI) is reshaping decision-making, from hiring to healthcare, a paradox persists: rather than correcting AI mistakes, humans often misplace their trust, either over-relying on flawed AI suggestions or dismissing correct ones.

A study, published in the Journal of Artificial Intelligence Research (2025), dives into the intricate relationship between AI reliance and decision quality. Titled "AI Reliance and Decision Quality: Fundamentals, Interdependence, and the Effects of Interventions", the research by Jakob Schoeffer, Johannes Jakubik, Michael Vössing, Niklas Kühl, and Gerhard Satzger sheds light on a critical yet often misunderstood issue - when and how AI truly improves human decision-making.

The complexity of human-AI collaboration

AI has been widely embraced in decision-making, particularly in high-stakes domains like finance, criminal justice, and medical diagnosis. The assumption behind these AI-assisted systems is simple: humans will act as the final checkpoint, ensuring that AI errors are caught and corrected. Yet, reality often deviates from this ideal.

The study emphasizes that human reliance on AI can be categorized into four key behaviors:

  • Correct adherence – following AI recommendations when they are correct.
  • Wrong adherence – following AI recommendations even when they are incorrect.
  • Correct overriding – rejecting incorrect AI recommendations.
  • Wrong overriding – rejecting AI recommendations even when they are correct.

The authors argue that true human-AI complementarity—where human oversight improves decision quality—requires striking a delicate balance: adhering to correct AI suggestions while overriding the incorrect ones. However, empirical studies reveal a persistent issue: many humans are unable to distinguish between correct and incorrect AI recommendations, leading to suboptimal decision-making.
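
To make these four categories concrete, here is a minimal sketch, not taken from the study itself, that tallies reliance behaviors from a log of AI-assisted decisions; the field names and sample records are hypothetical.

    # Illustrative sketch (not from the study): tally the four reliance behaviors
    # from a log of AI-assisted decisions. Field names and data are hypothetical.
    from collections import Counter

    def classify(ai_correct, human_followed):
        if human_followed:
            return "correct adherence" if ai_correct else "wrong adherence"
        return "wrong overriding" if ai_correct else "correct overriding"

    decisions = [
        {"ai_correct": True,  "human_followed": True},   # correct adherence
        {"ai_correct": False, "human_followed": True},   # wrong adherence
        {"ai_correct": False, "human_followed": False},  # correct overriding
        {"ai_correct": True,  "human_followed": False},  # wrong overriding
    ]

    print(Counter(classify(d["ai_correct"], d["human_followed"]) for d in decisions))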

Over-reliance vs. under-reliance: The twin pitfalls

A crucial insight from this research is the distinction between over-reliance and under-reliance. Over-reliance occurs when users accept AI recommendations without question, even when they are flawed. Under-reliance, on the other hand, happens when users distrust AI excessively, rejecting even correct recommendations.

Interestingly, the study finds that over-reliance is more prevalent when AI accuracy is high - people assume the system is always right. Conversely, when AI accuracy is lower, users tend to under-rely, second-guessing even correct AI outputs. The research demonstrates that complementarity is impossible if humans under-rely past a certain threshold, particularly when AI is highly accurate. In contrast, slight over-reliance might still lead to improved decision quality, albeit through sheer probability rather than human discernment.

The study also introduces a visual framework to quantify reliance behaviors, demonstrating how decision quality fluctuates based on AI accuracy and human adherence. This framework provides a new way to evaluate how well humans complement AI, showing that interventions aimed at improving AI reliance should focus not just on increasing or decreasing adherence but on ensuring the quality of reliance.
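
The framework itself is not reproduced here, but the underlying arithmetic can be sketched: the human's final accuracy is the AI's accuracy weighted by how often correct recommendations are adopted, plus the AI's error rate weighted by how often wrong recommendations are overridden. The numbers below are assumptions for illustration, not values from the paper.

    # Illustrative sketch of how decision quality depends on AI accuracy and reliance.
    # final_accuracy = P(AI correct) * P(adhere | AI correct)
    #                + P(AI wrong)   * P(override | AI wrong)
    # All figures are assumed for illustration, not taken from the study.

    def final_accuracy(ai_accuracy, adhere_when_correct, override_when_wrong):
        return ai_accuracy * adhere_when_correct + (1 - ai_accuracy) * override_when_wrong

    ai_accuracy = 0.90  # a highly accurate AI (assumed)

    # Under-reliance: correct recommendations are often rejected.
    print(final_accuracy(ai_accuracy, adhere_when_correct=0.70, override_when_wrong=0.80))  # 0.71

    # Slight over-reliance: almost everything is accepted, including some errors.
    print(final_accuracy(ai_accuracy, adhere_when_correct=0.98, override_when_wrong=0.20))  # 0.902

Under these assumed numbers, under-relying on a highly accurate AI drags final accuracy well below what the AI alone would achieve, while slight over-reliance edges just above it, echoing the study's point that the gain comes from probability rather than discernment.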

Can explanations help? The limits of AI transparency

One of the most anticipated solutions to the reliance problem is explainable AI (XAI) - systems that provide the reasoning behind their recommendations. Many assume that explanations will help users better judge when to trust AI and when to override it. However, the study reveals that this assumption is largely unproven.

Empirical research analyzed in the paper suggests that explanations rarely enhance decision-making accuracy. Instead, they often increase user confidence in AI, leading to even greater over-reliance. In some cases, explanations even backfire by reinforcing pre-existing biases - users may blindly follow AI just because its reasoning seems plausible, even if it is incorrect.

The authors highlight that measuring the impact of explanations solely through accuracy improvements is misleading. Two interventions may appear equally effective in terms of decision accuracy, yet one might drive more appropriate reliance while the other simply increases blind adherence. This study urges researchers to separate reliance behavior from decision quality when assessing AI interventions.
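
As a purely hypothetical illustration of that point, the sketch below shows two interventions that yield identical final accuracy while producing very different reliance quality; the numbers are assumptions, not results from the paper.

    # Hypothetical comparison: two interventions with identical final accuracy
    # but very different reliance quality. Numbers are illustrative only.

    def final_accuracy(ai_accuracy, adhere_when_correct, override_when_wrong):
        return ai_accuracy * adhere_when_correct + (1 - ai_accuracy) * override_when_wrong

    ai_accuracy = 0.80

    # Intervention 1: blind adherence - every recommendation is accepted, errors included.
    blind = final_accuracy(ai_accuracy, adhere_when_correct=1.00, override_when_wrong=0.00)

    # Intervention 2: discerning reliance - slightly fewer correct recommendations are
    # accepted, but half of the wrong ones are caught.
    discerning = final_accuracy(ai_accuracy, adhere_when_correct=0.875, override_when_wrong=0.50)

    print(blind, discerning)  # both 0.80, yet only the second catches any AI errors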

What this means for the future of AI-assisted decision-making

This research offers crucial insights for AI developers, policymakers, and end-users. It suggests that human-AI collaboration is far more complex than just "keeping a human in the loop."

  • Interventions should focus on helping users differentiate between correct and incorrect AI recommendations. Simply increasing adherence or trust in AI is not the answer - users must develop discerning reliance, not blind reliance.
  • Explainability alone is insufficient. AI systems must be designed not just to be transparent but to actively guide users toward appropriate overrides.
  • AI designers should consider thresholds for under-reliance and over-reliance when designing decision-support tools. If AI accuracy is high, users should be nudged toward higher adherence. If AI accuracy is lower, interventions should encourage selective skepticism rather than blanket rejection.
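
As a purely hypothetical sketch of that last recommendation, a decision-support tool might choose its prompt based on the deployed model's measured accuracy; the threshold and wording below are illustrative assumptions, not values from the study.

    # Hypothetical nudging rule keyed to the deployed model's measured accuracy.
    # The 0.85 threshold and the prompt wording are illustrative assumptions only.

    def reliance_prompt(measured_ai_accuracy, high_accuracy_threshold=0.85):
        if measured_ai_accuracy >= high_accuracy_threshold:
            # Highly accurate AI: nudge toward adherence while keeping overrides possible.
            return "This recommendation is usually reliable; override only with a concrete reason."
        # Less accurate AI: encourage selective skepticism rather than blanket rejection.
        return "Check this recommendation against the case details before accepting it."

    print(reliance_prompt(0.92))
    print(reliance_prompt(0.70))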

The study concludes that the future of AI-assisted decision-making does not lie in making AI more accurate alone but in aligning human reliance behavior with AI accuracy. Without solving this misalignment, AI will continue to be misused, leading to either excessive automation or ineffective human oversight - both of which could prove disastrous in critical domains.

FIRST PUBLISHED IN: Devdiscourse