Why AI fails without trust: Evidence from organizational decision support systems

CO-EDP, VisionRI | Updated: 16-01-2026 17:51 IST | Created: 16-01-2026 17:51 IST

New research shows that the real barrier to effective AI use is not computing power or algorithms but trust. Organizations gain measurable decision-making benefits from AI-based decision support systems only when users trust the data behind the recommendations and understand how those recommendations are produced. Without that trust, adoption stalls and efficiency gains disappear.

The findings are detailed in the study "Decision-Making in Complex Systems Using AI-Based Decision Support: The Role of Trust, Transparency, and Data Quality," published in the journal Electronics.

Trust emerges as the key condition for AI-driven decisions

Why do some organizations see clear improvements in decision speed and accuracy after adopting AI systems, while others struggle to translate AI investments into real outcomes?

To answer this, the researchers surveyed 324 professionals and managers working in IT, industry, services, finance, healthcare, education, retail, and public administration. All respondents had direct exposure to AI-supported decision-making tools such as AI-driven analytics platforms, intelligent enterprise systems, or predictive decision support software. Using advanced statistical modeling, the authors tested how different psychological and organizational factors interact to shape AI adoption and decision efficiency.

The results point to trust as the central enabling factor. Trust in AI-based decision support systems directly influences whether users perceive these tools as useful and easy to work with. Those perceptions, in turn, determine whether employees intend to adopt AI systems in their daily decision-making routines. Adoption intention then becomes the strongest predictor of improved decision-making efficiency.
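
To make that chain of relationships concrete, the sketch below shows how such a path model could be specified in Python with the open-source semopy package. The construct names and the input file are illustrative assumptions for demonstration, not the authors' actual instrument or dataset, and the sketch covers only the structural paths described in the article, not the measurement models linking each construct to its survey items.

```python
# Illustrative path-model sketch (semopy); variable names and the data file
# are assumptions for demonstration, not the study's actual instrument.
import pandas as pd
from semopy import Model

MODEL_DESC = """
# trust is shaped by perceptions of the data pipeline
trust ~ data_transparency + data_quality
# trust feeds the classic acceptance perceptions
perceived_usefulness ~ trust
perceived_ease_of_use ~ trust
# perceptions and trust drive the intention to adopt the system
adoption_intention ~ trust + perceived_usefulness + perceived_ease_of_use
# adoption intention is the direct predictor of decision efficiency
decision_efficiency ~ adoption_intention
"""

# Hypothetical CSV of respondents' averaged Likert-scale scores per construct.
data = pd.read_csv("survey_scores.csv")

model = Model(MODEL_DESC)
model.fit(data)
print(model.inspect())  # path coefficients, standard errors, p-values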

In other words, AI does not improve decisions simply by existing. It improves decisions only when users believe the system is reliable, fair, and aligned with organizational goals.

The research shows that trust itself is not abstract or emotional. It is built primarily on users’ assessment of data transparency and data quality. When users believe that the data feeding AI systems is accurate, complete, timely, and ethically managed, trust increases sharply. When transparency is lacking, trust erodes, even if the system produces technically sound recommendations.

This finding directly challenges the assumption that AI adoption is mainly a technical rollout problem. The study demonstrates that AI acceptance is a cognitive and organizational process shaped by how information is presented, governed, and explained to users.

Transparency and data quality drive adoption, not automation alone

While previous studies have focused on algorithm performance or automation benefits, this study shows that users care most about the integrity and clarity of the data pipeline.

Respondents reported higher trust in AI systems when they understood where the data came from, how it was processed, and whether ethical safeguards were in place. Transparency around data sources, processing logic, and governance practices reduced uncertainty and made AI recommendations easier to accept, even when users could not fully interpret complex algorithms.

The study finds that data transparency and quality have the strongest statistical effect on trust among all tested variables. This effect is stronger than the influence of perceived usefulness or ease of use alone. In practical terms, organizations that invest in high-quality data management and explainable data practices create conditions where AI systems are more likely to be embraced rather than resisted.

Perceived ease of use also plays a critical role, but the research shows it is closely tied to trust. Users who trust AI systems are more likely to perceive them as easy to use. This relationship suggests that usability is not only about interface design. It is also about confidence in system outputs. When users trust the underlying data and processes, interaction with AI systems feels less complex and less risky.

The research further confirms that perceived usefulness remains an important adoption driver. Users are more willing to adopt AI systems when they believe these tools help them make faster, more accurate, and better-informed decisions. However, usefulness alone is not enough. Without trust and transparency, perceived usefulness does not translate into sustained adoption.

This has direct implications for organizations operating in high-pressure decision environments. In sectors such as finance, public administration, logistics, and healthcare, decision-makers are accountable for outcomes. The study suggests that AI systems will only be integrated into critical decision workflows when users feel confident that the data and recommendations can withstand scrutiny.

Decision efficiency improves only when AI is trusted and accepted

The final insight concerns decision-making efficiency. The researchers define efficiency not just as speed, but as a combination of faster decisions, reduced errors, greater consistency, and improved ability to evaluate complex alternatives.

The analysis shows that intention to adopt AI-based decision support systems has a strong and direct effect on decision efficiency. Organizations where users intend to rely on AI tools report clearer gains in decision performance. These gains include reduced uncertainty, improved predictive insight, and better alignment between data and managerial judgment.

Crucially, adoption intention acts as a bridge between perception and performance. Trust, ease of use, and perceived usefulness all feed into adoption intention, but none of them directly improves decision efficiency on their own. This finding reinforces the idea that AI benefits only materialize when systems are actively used and embedded into decision routines.
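
A mediating role of this kind can be probed with a simple bootstrap of the indirect effect. The following sketch assumes the same hypothetical composite scores as above and asks whether trust is linked to decision efficiency through adoption intention; the column names are illustrative, and this is a simplified stand-in for the study's full statistical model.

```python
# Bootstrap sketch of the indirect (mediated) effect of trust on decision
# efficiency via adoption intention; column names are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_scores.csv")  # hypothetical composite scores
rng = np.random.default_rng(42)

indirect_effects = []
for _ in range(5000):
    idx = rng.integers(0, len(df), size=len(df))
    sample = df.iloc[idx]
    # a-path: trust -> adoption intention
    a = smf.ols("adoption_intention ~ trust", sample).fit().params["trust"]
    # b-path: adoption intention -> efficiency, holding trust constant
    b = smf.ols("decision_efficiency ~ adoption_intention + trust",
                sample).fit().params["adoption_intention"]
    indirect_effects.append(a * b)

lo, hi = np.percentile(indirect_effects, [2.5, 97.5])
print(f"95% bootstrap CI for the indirect effect: [{lo:.3f}, {hi:.3f}]")
# An interval that excludes zero is consistent with adoption intention
# acting as a bridge between trust and decision efficiency.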

While trust is necessary, excessive or uncritical trust in AI systems carries risks. Overreliance on automated recommendations can lead to automation bias, where users accept AI outputs without sufficient judgment or contextual evaluation. The authors emphasize the need for balanced human–AI collaboration, where AI systems support rather than replace managerial reasoning.

This balance depends on transparent system design, user training, and governance mechanisms that keep humans accountable for final decisions. AI systems that function as cognitive partners rather than decision substitutes are more likely to enhance performance without undermining professional autonomy.

FIRST PUBLISHED IN: Devdiscourse