CO-EDP, VisionRI | Updated: 02-04-2025 10:00 IST | Created: 02-04-2025 10:00 IST
Public trust in AI decision-making falls short of human judgment

Despite the widespread integration of artificial intelligence in decision-making systems across healthcare, finance, transportation, and employment, a new survey experiment has found that public trust in AI-assisted decisions remains significantly lower than in traditional human-led processes. The study, titled “Trust in Artificial Intelligence: A Survey Experiment to Assess Trust in Algorithmic Decision-Making” and published in AI & Society, presents one of the most comprehensive investigations into the social perception of AI-based decision-making systems to date.

Conducted in Hungary with a nationally representative sample of 2,100 respondents, the study used a randomized controlled experiment to compare public trust in four scenarios - medical diagnoses, hiring, car purchases, and financial investment decisions - with and without the assistance of AI-based automated decision-making (ADM) systems. While many global policymakers and institutions advocate for increased reliance on algorithmic tools for efficiency and objectivity, the findings suggest the public is not yet ready to cede full trust to machines.
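For readers unfamiliar with this design, the core analysis in such an experiment is a between-subjects comparison: each respondent is randomly shown either a human-only or an AI-assisted version of a scenario, and mean trust ratings are compared across the two arms. The sketch below illustrates that comparison in Python with simulated data; the ratings, sample sizes, and scale are hypothetical and do not reproduce the study's actual analysis.

```python
# Minimal sketch of a between-subjects treatment comparison for a survey
# experiment. All data, sample sizes, and effect sizes are simulated;
# this is not the study's actual analysis code.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical trust ratings on a 1-7 scale for one domain, two randomized arms.
human_only = rng.normal(loc=5.1, scale=1.2, size=260)   # control: human decision-maker
ai_assisted = rng.normal(loc=4.6, scale=1.2, size=260)  # treatment: AI-assisted decision

# Welch's two-sample t-test: is mean trust lower in the AI-assisted arm?
t_stat, p_value = stats.ttest_ind(ai_assisted, human_only, equal_var=False)

print(f"mean difference (AI - human): {ai_assisted.mean() - human_only.mean():.2f}")
print(f"Welch t = {t_stat:.2f}, p = {p_value:.4f}")
```

Randomization is what licenses the causal reading: because respondents are assigned to arms by chance, a significant difference in mean trust can be attributed to the presence of AI in the scenario rather than to respondent characteristics.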

In three of the four domains tested, participants expressed significantly lower trust when AI was involved, even in an assistive, non-autonomous role. Trust in human-led decisions exceeded trust in AI-assisted decisions in healthcare, hiring, and transportation. Only in financial investment decisions did trust remain comparable between AI-supported and human-only scenarios.

The researchers tested a range of variables that could moderate public attitudes, including age, gender, education level, income, place of residence, political orientation, religiosity, institutional trust, AI knowledge, privacy concerns, and personality traits. While many demographic factors were hypothesized to influence trust, the study found no significant moderating effects for gender, age, education, financial status, political orientation, religiosity, or institutional trust.

Instead, three factors stood out: familiarity with AI, privacy attitudes, and personality. People who reported a good understanding of AI were significantly more likely to trust its involvement, particularly in the medical and financial investment scenarios. Those who had fewer concerns about data privacy also demonstrated a greater willingness to trust ADM. Additionally, individuals with high scores in the personality trait “openness to experience” were more likely to express trust in AI-assisted financial decisions.
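Moderating effects of this kind are typically tested by adding an interaction term to a regression of trust on the experimental condition. The sketch below shows the idea with statsmodels; the data file, variable names, and linear specification are assumptions for illustration, not the authors' model.

```python
# Illustrative moderation test: does self-reported AI knowledge soften the
# effect of AI involvement on trust? The data file and variable names below
# are hypothetical, not taken from the published study.
import pandas as pd
import statsmodels.formula.api as smf

# Assumed layout: one row per respondent.
#   trust        - trust rating (e.g., 1-7)
#   ai_condition - 1 if the scenario involved AI assistance, else 0
#   ai_knowledge - self-reported familiarity with AI (standardized)
df = pd.read_csv("survey_responses.csv")  # hypothetical file

# "*" expands to both main effects plus the interaction. A positive coefficient
# on ai_condition:ai_knowledge would indicate that the trust penalty for AI
# involvement shrinks as familiarity with AI increases - a moderating effect.
model = smf.ols("trust ~ ai_condition * ai_knowledge", data=df).fit()
print(model.summary())
```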

Increasing public knowledge about artificial intelligence could play a crucial role in boosting trust. When people understand how AI works, they may focus more on its advantages than on its risks.

Conversely, participants with high privacy concerns were less trusting of AI-based systems, particularly in health and finance, highlighting the continued sensitivity around data security and the perceived invasiveness of algorithmic technologies. The findings also support prior literature that emphasizes the psychological dimension of trust in automation, such as discomfort with opaque systems and concerns about accountability and fairness.

While the survey found broad skepticism about AI, it also revealed that distrust is not uniformly distributed across domains. AI-assisted transportation decisions, such as autonomous driving features, elicited less distrust than AI's application in hiring or healthcare. The study suggests that domain-specific characteristics and societal norms shape public perception. In high-stakes domains where decisions directly affect human welfare, skepticism toward non-human judgment is more pronounced.

This variation indicates that trust in AI is not monolithic. People evaluate risk and fairness differently depending on the context. In scenarios like financial advising or semi-autonomous vehicles, AI may be seen as helpful or even superior, but not when a human life or livelihood is at stake.

The study has timely implications for governments and industries across Europe, particularly as the European Union’s AI Act, enacted in 2024, moves toward regulating high-risk AI systems in employment, finance, and healthcare. Under the new legislation, systems classified as high-risk must meet stringent transparency, accountability, and human oversight requirements.

Researchers suggest that addressing the trust deficit will require more than technical compliance. Public communication, education, and transparency about how ADM systems function will be crucial. Developers and policymakers may need to reframe ADM not as a replacement for human judgment but as a complement to it, emphasizing hybrid models that preserve human oversight.

The study also points to a future where attitudes toward AI could evolve as people gain more direct experience with these systems. As AI technologies continue to spread and become part of everyday interactions, from voice assistants to digital health apps, exposure and familiarity may gradually close the trust gap.

Furthermore, the authors caution against overgeneralizing the findings outside Hungary without further comparative research. Hungary’s post-socialist context, relatively low ADM adoption, and socio-political history may influence public attitudes in ways that differ from those in Western Europe or the United States. However, the core insight - that people generally trust human decision-making more than machine-supported alternatives - is likely to hold relevance across diverse cultures and settings.

Future studies may explore trust dynamics in other emerging ADM contexts, such as education, criminal justice, and climate policy. The authors also urge deeper investigation into how different levels of automation and design features like transparency and explainability can influence public confidence in artificial intelligence.

FIRST PUBLISHED IN: Devdiscourse