Misaligned transparency and cognitive overload are major barriers to safe trust in AI
Artificial intelligence is being woven into daily work, public services, and security systems, yet public trust in these tools remains uneven and often poorly understood. A new study published in Frontiers in Computer Science, titled “Explicating the trust process for effective human interaction with artificial intelligence and machine learning systems,” brings this problem into sharp focus.
While developers continue improving model accuracy, the authors argue that technical strength alone does not secure trust. The crucial missing piece is a deeper understanding of the human user, whose attitudes, personality traits, cognitive habits, and expectations shape whether they rely on or reject machine guidance. Their work pushes the conversation toward the psychological layer of AI adoption, a layer that must be understood before societies can safely and responsibly expand AI deployments.
The paper notes that AI and machine learning applications now influence broad sectors of daily life, including transportation, criminal justice recommendations, route planning, and routine work tasks. These systems help reduce human workload and improve speed and accuracy, but they also introduce risks when users misjudge when to rely on them. The researchers note that the field has historically focused on improving AI performance rather than on how humans interact with these systems, leaving major questions unanswered about what drives appropriate trust, inappropriate trust, or outright refusal to use AI tools.
In response, the authors bring together insights from social psychology, computer science, information science, and human factors research to create a unified model of how trust in AI forms. Their framework builds on decades of research on trust in human-machine interaction but adapts it for a world where algorithms now handle complex decisions, large data sets, and real-time predictions. A key advance of the model is its clear separation of the human user from the machine referent, which allows the study to map each step of the cognitive process that leads someone to comply with or reject an AI system’s output.
A major component of this model is transparency. The authors argue that transparency is no longer a helpful extra feature but a central part of building trust. It affects how much information a user can process about a system’s operations, goals, confidence level, and logic. When transparency is too low or too complex, people may misunderstand what the system can and cannot do. This creates the conditions for either blind trust or unreasonable doubt, both of which carry risks. The study highlights that transparency must be adapted to the user’s cognitive style to be effective. Different individuals process system information in different ways, and trust can only grow when a system presents information in a form that matches the user’s needs.
The research also examines the role of personal differences in shaping trust. Factors such as personality traits, attitudes toward automation, risk tolerance, prior experience with technology, and even basic cognitive tendencies all have measurable effects on how people judge AI systems. Some individuals have a natural readiness to trust machine recommendations, while others are more cautious. These tendencies are not fixed, however. They shift depending on context, task complexity, and system behavior. The authors explain that these individual differences interact with system characteristics and situational pressures, forming a dynamic trust environment that can change rapidly.
The authors draw on established psychological pathways that describe how humans evaluate information through either quick, surface-level thinking or deeper, more deliberate reasoning. When a system provides cues that appear familiar or simple, users often rely on fast thinking and assume the machine output is correct. When tasks are more complex or when system behavior appears uncertain, users may switch to more effortful thinking, evaluating details and evidence. Both routes can lead to trust or distrust, depending on how the system communicates. This means developers must pay close attention to how their systems present information, since even small changes in format can shift the user’s trust level.
The study discusses how AI tools can either support or hinder trust depending on the context of their use. Tasks that involve safety, health, or major financial outcomes, for example, demand more accurately calibrated trust. Overreliance can produce harm if the system makes an error and the human fails to question it, while underreliance leads humans to ignore helpful machine guidance, which also degrades performance. The authors stress that the goal is not simple trust but appropriate trust, which requires systems that are both understandable and well aligned with user expectations.
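To make that distinction concrete, the gap between overreliance and underreliance can be expressed as a simple tally over logged human-AI decisions. The sketch below is a minimal illustration rather than anything from the study; the record format and the notion of a “reliance profile” are assumptions introduced for the example.

```python
# Illustrative sketch, not the study's method: tallying appropriate reliance,
# overreliance, and underreliance from a hypothetical log of human-AI decisions.
from dataclasses import dataclass

@dataclass
class Interaction:
    ai_correct: bool      # was the AI's recommendation actually right?
    user_followed: bool   # did the user act on the recommendation?

def reliance_profile(log):
    """Return the share of appropriate, over-reliant, and under-reliant decisions."""
    over = sum(1 for i in log if i.user_followed and not i.ai_correct)
    under = sum(1 for i in log if not i.user_followed and i.ai_correct)
    n = max(len(log), 1)
    return {
        "appropriate": (len(log) - over - under) / n,  # followed good advice or rejected bad advice
        "overreliance": over / n,                      # followed the AI when it was wrong
        "underreliance": under / n,                    # ignored the AI when it was right
    }

log = [
    Interaction(ai_correct=True,  user_followed=True),   # appropriate
    Interaction(ai_correct=False, user_followed=True),   # overreliance
    Interaction(ai_correct=True,  user_followed=False),  # underreliance
    Interaction(ai_correct=False, user_followed=False),  # appropriate
]
print(reliance_profile(log))
# {'appropriate': 0.5, 'overreliance': 0.25, 'underreliance': 0.25}
```

Framed this way, appropriate trust means driving both the overreliance and underreliance shares down, rather than simply maximizing how often users follow the system.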
The paper also explores how explainable AI plays into this process. Explainability tools aim to reveal how a system reached its conclusions, but the study notes that explainability alone does not guarantee trust. Explanations must be meaningful, tailored, clear, and useful to the user’s mental model. Explanations that are too technical or too vague can weaken trust rather than strengthen it. The authors warn that when explainability creates a false sense of understanding, it can even encourage unhealthy levels of reliance, especially in critical tasks.
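As a rough illustration of what such explanations look like in practice, the sketch below breaks a linear score into per-feature contributions, one of the simplest explanation styles. It is not taken from the paper; the feature names, weights, and wording are hypothetical, and the paper’s caution applies: a tidy breakdown like this can still mislead if the underlying model is more complex than the explanation implies.

```python
# Illustrative sketch, not from the paper: a per-feature contribution explanation
# for a toy linear scoring model. Feature names and weights are hypothetical.
WEIGHTS = {"income": 0.4, "debt_ratio": -0.7, "years_employed": 0.2}

def explain(applicant):
    """Split a linear score into per-feature contributions, largest effect first."""
    contributions = {name: w * applicant[name] for name, w in WEIGHTS.items()}
    score = sum(contributions.values())
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return score, ranked

score, ranked = explain({"income": 1.2, "debt_ratio": 0.9, "years_employed": 0.5})
print(f"score = {score:+.2f}")
for name, value in ranked:
    direction = "raised" if value > 0 else "lowered"
    print(f"  {name} {direction} the score by {abs(value):.2f}")
```

Whether output like this builds or erodes trust depends, as the authors note, on how well it matches the user’s mental model, not on how faithfully it reports the arithmetic.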
The study also identifies important gaps that future research must address. The authors acknowledge that their framework does not fully cover cultural, social, and group-level influences on trust. Different cultures have different norms around automation, authority, and decision-making, which may shift how they respond to AI systems. Group dynamics also influence trust, especially as AI tools become integrated into team environments. The researchers point out that these areas require deeper study, especially for global AI deployment that spans regions with diverse expectations.
To sum up, the authors argue that as AI systems continue to expand into high-stakes fields over the next decade, the research community must prioritize understanding and improving the trust process. They suggest that their model can guide the next phase of experiments and system designs, pushing developers to test how real users respond under conditions that resemble real-world scenarios.
Without a strong understanding of human psychology, even the most advanced systems may fail to gain appropriate trust. Poorly calibrated trust can lead to harmful outcomes, public resistance, or inefficient use of powerful tools.
- FIRST PUBLISHED IN:
- Devdiscourse

