AI adoption accelerates when people lose faith in human judgment

The rise of AI in decision-making should not be interpreted as a sign of deep confidence in the technology. Instead, it reflects a vacuum created by declining trust in traditional social structures. This trust gap may grow as AI tools become normal parts of everyday life, potentially encouraging users to bypass human advice in favor of perceived neutrality.


CO-EDP, VisionRI | Updated: 29-11-2025 10:46 IST | Created: 29-11-2025 10:46 IST

A new academic analysis suggests that public reliance on artificial intelligence (AI) may be driven less by faith in technology and more by a growing sense of distrust in human judgment. The study, “Trust in AI emerges from distrust in humans: A machine learning study on decision-making guidance,” examines why people increasingly turn to AI chatbots for guidance across factual, emotional, and moral decisions. The research explores how users distribute their trust between artificial and human agents, and how that trust shifts depending on the type of dilemma they encounter.

The findings signal a major shift in how people evaluate sources of authority. The authors argue that AI is gaining legitimacy not because it is deeply vetted or widely understood, but because many people view human advisers as biased, unreliable, or insufficiently objective. The concept underlying this trend, which the study terms deferred trust, holds that confidence in AI rises as confidence in human helpers declines. This shift may carry significant implications for public understanding, critical thinking, and democratic decision-making, particularly as AI tools become more integrated into everyday problem solving.

AI gains ground as human advisers lose public confidence

The researchers ran an experiment in which 55 Colombian undergraduate students were presented with 30 decision-making scenarios. Each scenario required the participant to select one of five possible guides: an AI chatbot, a voice assistant, a peer, an adult, or a priest. The researchers analyzed which option participants preferred for factual information, emotional support, or moral dilemmas. Across the full set of tasks, adults were chosen most frequently, accounting for slightly more than one-third of selections. AI systems followed closely, surpassing peers, priests, and voice assistants.

The data reveals important distinctions in how trust is allocated. AI was preferred in scenarios that required knowledge, factual clarity, or structured reasoning. Human advisers were favored in situations connected to interpersonal sensitivity, moral judgment, or emotional weight. This suggests that users differentiate between functional accuracy and social understanding, assigning AI a prominent role when precision seems more important than empathy.

However, the study’s machine learning analysis shows that the rise of AI as a trusted agent is strongly shaped by declining confidence in human figures. Using XGBoost classifiers with SHAP explanations, the authors found that lower trust in adults, priests, or peers significantly increased the likelihood of choosing AI for decision support. This effect held across demographic and behavioral segments, indicating a consistent pattern in how trust is redistributed between human and artificial guidance.
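The paper does not publish its analysis code, but the pipeline it describes, a gradient-boosted classifier whose predictions are explained with SHAP feature attributions, can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the feature names, the synthetic data, and the model settings are stand-ins, not the study’s actual variables or results.

```python
# Illustrative sketch of an XGBoost + SHAP analysis of guide choice.
# All features and data are synthetic stand-ins, not the study's dataset.
import numpy as np
import pandas as pd
import shap
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1650  # e.g. 55 participants x 30 scenarios

# Hypothetical predictors: self-reported trust in human advisers plus demographics.
X = pd.DataFrame({
    "trust_adult":  rng.integers(1, 6, n),
    "trust_peer":   rng.integers(1, 6, n),
    "trust_priest": rng.integers(1, 6, n),
    "age":          rng.integers(17, 25, n),
    "ses_level":    rng.integers(1, 4, n),
})
# Synthetic label: lower trust in human advisers raises the chance of choosing the AI guide.
p_choose_ai = 1.0 / (1.0 + np.exp(X["trust_adult"] + X["trust_peer"] - 6))
y = rng.binomial(1, p_choose_ai)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss")
model.fit(X_train, y_train)

# SHAP attributes each prediction to individual features, showing whether
# low trust in human advisers pushes the model toward predicting "chose AI".
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test, show=False)
```

On data constructed this way, the SHAP summary would rank the trust-in-humans features as the strongest drivers of the "chose AI" prediction, which is the kind of pattern the authors report for their real sample.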

The scenarios were deliberately varied to test multiple aspects of decision-making. Some questions focused on technical knowledge. Others addressed emotional challenges or ethical conflicts. Across these categories, participants showed that AI earns greater trust when the expected task is rooted in objective knowledge rather than interpersonal experience. This reinforces a broader trend identified in global surveys: people often see AI systems as neutral or impartial, even when they do not fully understand how these systems operate.

The researchers also observed that some participants appeared to use AI as a form of escape from human judgment. Users who reported discomfort seeking help from peers or adults were more inclined to rely on AI tools. This dynamic highlights the social role that AI can play, filling a gap for individuals who fear judgment, bias, or conflict when interacting with other people. While this may enhance comfort in the short term, the study warns that it may weaken interpersonal trust in the long term.

Machine learning models reveal predictors of trust in AI

To better understand why certain individuals placed more trust in AI systems, the authors incorporated machine learning tools that could classify decision patterns and identify the strongest predictors of AI reliance. The models achieved high performance, with average precision scores approaching 0.88. This allowed the researchers to explore which personal characteristics and behavioral patterns consistently led to choosing AI as the preferred guide.
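As a rough illustration of how a figure like 0.88 might be obtained, the sketch below reports cross-validated precision for the same kind of classifier. The data is again synthetic and hypothetical, so the printed number will not match the paper’s result.

```python
# Illustrative sketch: cross-validated precision for a guide-choice classifier.
# Synthetic data only; the ~0.88 figure is the paper's, not what this toy run yields.
import numpy as np
import pandas as pd
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

rng = np.random.default_rng(1)
n = 1650  # e.g. 55 participants x 30 scenarios

X = pd.DataFrame({
    "trust_adult":  rng.integers(1, 6, n),
    "trust_peer":   rng.integers(1, 6, n),
    "trust_priest": rng.integers(1, 6, n),
})
p_choose_ai = 1.0 / (1.0 + np.exp(X["trust_adult"] + X["trust_peer"] - 6))
y = rng.binomial(1, p_choose_ai)

model = XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss")
scores = cross_val_score(model, X, y, cv=5, scoring="precision")
print(f"Mean precision across folds: {scores.mean():.2f}")
```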

The study found that individuals who displayed low trust in human authority figures were significantly more likely to rely on AI. This relationship held even when controlling for age, socioeconomic background, and other demographic factors. The study argues that this form of trust redistribution is not fully captured by traditional technology-acceptance models, which emphasize usefulness, ease of use, and perceived reliability. Instead, the findings show that trust in AI operates on a relational dimension shaped by people’s existing skepticism toward social institutions.

Another finding concerns socio-demographic patterns. Participants from higher socioeconomic backgrounds displayed stronger trust in AI systems than peers from lower-income groups. The authors suggest that this may relate to differing exposure to technology, educational expectations, or past interactions with human authority figures. The study also found that individuals with lower general technology use were sometimes more willing to defer to AI, an unexpected trend suggesting that those who lack deep technological literacy may place more weight on the surface-level fluency of AI responses.

Confident and polished answers generated by AI tools also played a role in shaping user trust. Participants who were drawn to AI outputs often highlighted the smooth, structured way information was presented. The study finds that this polished delivery can mask the underlying risks posed by AI’s tendency to produce confident errors. This is particularly important because users with weaker critical evaluation skills may interpret fluent responses as evidence of accuracy, reducing their vigilance toward misleading or incorrect information.

Age also influenced trust patterns. Older students in the sample tended to show greater skepticism toward AI recommendations, while younger participants were more receptive to AI guidance. Those with more digital experience demonstrated a more balanced approach to selecting between human and artificial advisers, suggesting that technology familiarity can encourage critical distance rather than unquestioning trust.

Rising dependence on AI may encourage overreliance and erode critical thinking

The concept of epistemic vigilance, which refers to the ability to evaluate the credibility of information and its sources, plays a significant role in the authors’ warning. The study argues that the smooth communication style of AI systems can reduce this vigilance, particularly for people already skeptical of human advisers. As a result, users may over-trust AI even when systems generate incorrect or oversimplified responses.

The rise of AI in decision-making should not be interpreted as a sign of deep confidence in the technology, the study asserts. Instead, it reflects a vacuum created by declining trust in traditional social structures. This trust gap may grow as AI tools become normal parts of everyday life, potentially encouraging users to bypass human advice in favor of perceived neutrality.

The study also suggests that AI systems could reshape interpersonal dynamics. If people grow accustomed to seeking guidance from AI for sensitive or morally complex decisions, they may invest less effort in building supportive relationships with other people. This poses risks for emotional resilience, social cohesion, and civic engagement, especially among groups that already feel disconnected from traditional authority.

The study proposes expanding research to more diverse populations to explore how cultural, economic, and institutional factors shape trust patterns.
