When AI disagrees, users change their minds nearly half the time

CO-EDP, VisionRI | Updated: 14-08-2025 09:46 IST | Created: 14-08-2025 09:46 IST

A team of researchers has uncovered how people interact with artificial intelligence (AI) explanations in decision-support systems, revealing that while explanations can sway final decisions, many users fail to read them thoroughly. The findings highlight both the promise and the pitfalls of explainable AI (XAI) in high-stakes decision-making environments.

The study titled “Can AI Explanations Make You Change Your Mind?” examines whether the presence, format, and context of AI explanations affect a user’s willingness to change their initial decision when faced with AI recommendations. Through a controlled online experiment, the authors explored not only whether people switched to align with AI outputs, but also the cognitive effort they invested in reviewing explanations and how prior AI experience shaped their responses.

How explanations were tested in a decision-making workflow

The study’s experimental setup simulated a real-world decision-support workflow, split into two stages. Participants were tasked with predicting whether a student from a Portuguese university dataset would graduate or drop out. In the first stage, participants made an initial prediction unaided. In the second, they were shown the AI’s prediction, along with an optional explanation of how the AI reached that conclusion, before making a final decision.

To capture how explanation style affects user engagement, the researchers tested three formats for conveying local feature importance: highlighting, a bar chart, and text, with the underlying values computed using SHAP (SHapley Additive exPlanations). Timing data were recorded for both the first and second decisions to gauge how long participants spent deliberating with and without explanations.
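For readers who want a concrete picture of what "local feature importance computed with SHAP" means, the sketch below builds a toy stand-in for the student-outcome classifier and derives per-student SHAP contributions that could feed a text or bar-chart explanation. The feature names, model, and data are illustrative placeholders, not the study's actual pipeline.

```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier

# Toy stand-in for the student-outcome task (1 = graduate, 0 = dropout).
# Feature names are invented for illustration only.
rng = np.random.default_rng(0)
n = 500
X = pd.DataFrame({
    "admission_grade": rng.normal(130, 15, n),
    "units_approved_1st_sem": rng.integers(0, 8, n),
    "tuition_up_to_date": rng.integers(0, 2, n),
    "age_at_enrollment": rng.integers(17, 40, n),
})
y = ((X["units_approved_1st_sem"] + 3 * X["tuition_up_to_date"]
      + rng.normal(0, 2, n)) > 4).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Local (per-student) SHAP values for the predicted probability of graduating.
predict_graduate = lambda data: model.predict_proba(data)[:, 1]
explainer = shap.Explainer(predict_graduate, X)   # model-agnostic explainer
sv = explainer(X.iloc[:1])                        # explanation for one student

# "Text" format: rank features by signed contribution to the prediction.
for name, val in sorted(zip(X.columns, sv.values[0]),
                        key=lambda t: -abs(t[1])):
    print(f"{name:<25s} {val:+.3f}")

# "Bar chart" format: shap's built-in per-instance bar plot of the same values.
shap.plots.bar(sv[0])
```

The same per-instance values back all three formats in principle; what varies is only how they are rendered to the user.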

The results showed marked differences in engagement across formats. On average, providing an explanation increased second-decision deliberation time significantly compared with no explanation. Highlighting nearly doubled decision time, bar charts more than tripled it, and text explanations extended it roughly fourfold. These differences suggest that while explanations do draw more attention, the level of engagement depends heavily on presentation style.

When AI disagrees, users often switch, but context matters

The study examined how often participants changed their mind after seeing the AI’s output. When the AI agreed with their initial choice, participants rarely engaged deeply with the explanation, often skimming or ignoring it altogether. On the other hand, when the AI’s prediction disagreed with their first decision, participants switched to the AI’s answer 42.9% of the time.

The researchers also identified important behavioural patterns tied to decision timing. Doubling the time spent on the initial decision reduced the likelihood of switching by about 18%, suggesting that stronger initial conviction reduces susceptibility to AI persuasion. Conversely, doubling the time spent on the second decision increased the likelihood of switching by around 13%, indicating that more deliberation with AI input can lead to greater acceptance of its recommendations.

Explanation format further influenced switching rates. Both bar chart and text explanations significantly increased the odds of switching compared with no explanation, by factors of approximately 4.9 and 6.5 respectively. Highlighting, while boosting deliberation time, had a less pronounced effect on actual decision changes.
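The paper's exact statistical model is not spelled out here, but figures such as "doubling deliberation time changes the likelihood of switching by about 18%" and "bar charts raise the odds of switching by a factor of roughly 4.9" are the kind of quantities a logistic regression with log-scaled time predictors produces. The sketch below simulates disagreement trials whose effects are seeded to echo the reported numbers, fits such a model with statsmodels, and shows how exponentiated coefficients read as odds ratios. The variable names, simulated data, and model form are all assumptions for illustration, not the authors' actual analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate one row per disagreement trial (assumed schema, for illustration).
rng = np.random.default_rng(1)
n = 2000
fmt = rng.choice(["none", "highlight", "bar", "text"], size=n)
t_first = rng.lognormal(mean=2.5, sigma=0.6, size=n)    # seconds on decision 1
t_second = rng.lognormal(mean=2.0, sigma=0.8, size=n)   # seconds on decision 2

# Latent log-odds seeded to mirror the reported effects.
fmt_lor = {"none": 0.0, "highlight": 0.4, "bar": np.log(4.9), "text": np.log(6.5)}
lin = (-1.0
       + np.log(0.82) * np.log2(t_first)    # doubling t_first: ~18% lower odds
       + np.log(1.13) * np.log2(t_second)   # doubling t_second: ~13% higher odds
       + np.array([fmt_lor[f] for f in fmt]))
switched = rng.binomial(1, 1 / (1 + np.exp(-lin)))

df = pd.DataFrame(dict(switched=switched, t_first=t_first,
                       t_second=t_second, fmt=fmt))

res = smf.logit(
    "switched ~ np.log2(t_first) + np.log2(t_second)"
    " + C(fmt, Treatment(reference='none'))",
    data=df).fit(disp=False)

# exp(coefficient) is an odds ratio: ~0.82 on log2(t_first) means doubling the
# first-decision time cuts the odds of switching by ~18%; the format terms are
# odds ratios relative to showing no explanation.
print(np.exp(res.params).round(2))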

The role of prior AI experience and the risk of overtrust

The study also examined how prior AI experience shaped user behaviour. Results showed that people with previous exposure to AI systems were more responsive to explanations, particularly textual ones, and more likely to adjust their decisions in light of AI input. However, responsiveness to explanations carried a double edge: for those without prior AI experience, text explanations often boosted warranted trust (switching when the AI was correct) and overtrust (switching when the AI was wrong) in equal measure.
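To make that distinction concrete, one simple way to separate warranted trust from overtrust in per-trial logs is sketched below. The column names and the calculation are our own operationalisation of the terms used above, not the paper's reported methodology.

```python
import pandas as pd

def trust_rates(trials: pd.DataFrame) -> pd.Series:
    """trials needs boolean columns: disagreed, switched, ai_correct."""
    d = trials[trials["disagreed"]]
    warranted = d.loc[d["ai_correct"], "switched"].mean()    # switched, AI right
    over = d.loc[~d["ai_correct"], "switched"].mean()        # switched, AI wrong
    return pd.Series({"warranted_trust": warranted, "overtrust": over})

# Example with a few synthetic trials (hypothetical values).
demo = pd.DataFrame({
    "disagreed":  [True, True, True, True, False],
    "switched":   [True, False, True, True, False],
    "ai_correct": [True, True, False, False, True],
})
print(trust_rates(demo))
```

An explanation format that raises warranted trust without raising overtrust is the desirable case; the study's concern is that, for inexperienced users, text explanations raised both.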

This finding underscores a persistent challenge in explainable AI: ensuring that explanations improve decision quality rather than simply increasing compliance with AI outputs. The authors note that participants’ accuracy tended to improve in parallel with the AI’s performance, but this did not necessarily indicate that users were critically evaluating the explanations. In many cases, behavioural evidence suggested that explanations were skimmed or processed superficially, especially when the AI confirmed the user’s initial choice.

The study’s results point to the need for careful design of explanation interfaces that encourage deeper cognitive engagement without overloading the user. The balance is delicate: explanations must be interpretable enough to guide users toward better outcomes, yet structured to promote healthy scepticism rather than blind acceptance.

Implications for explainable AI in practice

The findings offer critical guidance for developers and policymakers deploying AI in decision-support contexts such as healthcare, finance, and public administration. First, explanation format matters not just for transparency, but for its persuasive impact. Bar charts and text descriptions proved more influential in prompting decision changes, suggesting these formats may be more effective when user reconsideration is desirable.

Second, user context, particularly prior AI experience, should be factored into system design. Training and onboarding processes may be essential to help less experienced users interpret explanations critically, avoiding patterns of overtrust.

The study also highlights a gap between the intended and actual use of AI explanations. Even when explanations are available, many users do not fully engage with them unless prompted by disagreement. This behaviour suggests that simply adding explanations to AI interfaces may not be enough to ensure informed human-AI collaboration; systems may need interactive or adaptive explanation delivery to encourage deeper review when it matters most.
