How visible AI thinking shapes human trust in chatbots
Efforts to make artificial intelligence (AI) more transparent may be introducing new psychological risks alongside their benefits. A new academic study suggests that visible thinking features in chatbots (brief messages explaining a system's intentions before it replies) can raise user expectations, amplify trust, and alter emotional engagement in ways designers may not fully anticipate.
The research, titled "Watching AI Think: User Perceptions of Visible Thinking in Chatbots" and published on arXiv, examines how these design choices affect user perceptions in sensitive, real-world contexts.
Visible thinking as a social signal, not a neutral feature
The research team designed an experiment to isolate the impact of visible thinking from the quality of the chatbot’s advice itself. Participants interacted with a chatbot in controlled scenarios where the final suggestions were identical, but the presence and framing of the chatbot’s “thinking” varied. In one condition, the chatbot showed no thinking at all. In two others, it displayed short, value-oriented reflections before responding, either emphasizing emotional support or professional expertise.
Participants were asked to discuss two kinds of personal challenges. One set focused on habit-related issues such as sleep routines, diet, or exercise. The other involved feelings-related situations including stress, guilt, or anxiety. This distinction allowed the researchers to examine whether visible thinking interacts differently with practical self-improvement versus emotional disclosure.
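To make the manipulation concrete, it can be sketched as a short preamble prepended to an otherwise identical reply. The condition labels, preamble wording, and helper function below are illustrative assumptions for this article, not the study's actual materials.

```python
# Hypothetical sketch of the three study conditions: the final advice is
# held constant, and only the visible "thinking" preamble changes.

THINKING_PREAMBLES = {
    "none": None,
    "emotional": ("Thinking: I want to respond in a supportive, "
                  "non-judgmental way that acknowledges how this feels..."),
    "expertise": ("Thinking: I should draw on evidence-based guidance "
                  "to give a well-reasoned, practical answer..."),
}

def build_reply(condition: str, advice: str) -> str:
    """Prepend the condition-specific thinking message to the same advice."""
    preamble = THINKING_PREAMBLES[condition]
    if preamble is None:
        return advice
    return f"{preamble}\n\n{advice}"

# Example: identical advice shown under each framing condition.
advice = "Try setting a consistent bedtime and limiting screens in the hour before sleep."
for condition in THINKING_PREAMBLES:
    print(f"--- {condition} ---")
    print(build_reply(condition, advice))
```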
What emerged was a consistent pattern. When chatbots showed emotionally supportive thinking, users were more likely to perceive them as warm, caring, and empathic. The system appeared to take the user seriously, to pause and consider emotional needs before responding. In contrast, expertise-oriented thinking shifted perceptions in a different direction. Users were more inclined to see the chatbot as competent, logical, and trustworthy, particularly when seeking advice grounded in knowledge or evidence.
These shifts occurred even though the actual advice did not change. The only difference was the brief thinking statement shown beforehand. This indicates that visible thinking functions less as an explanation of reasoning and more as a form of self-presentation. It signals effort, intention, and character in much the same way human conversational cues do.
The absence of visible thinking also sent a message. Participants interacting with a chatbot that offered no such cues often described the experience as flat or impersonal. Even when the advice was sensible, users were more likely to interpret the response as generic, low-effort, or detached from their specific situation. For many, the lack of visible thinking reduced motivation to follow the chatbot’s suggestions.
Empathy, expertise, and the risk of raised expectations
While emotionally supportive thinking boosted perceptions of empathy, it also introduced a new challenge. By explicitly framing itself as caring and nonjudgmental, the chatbot raised expectations about the depth and personalization of its eventual response. When the final advice did not fully match the emotional tone implied by the thinking statement, some users experienced disappointment.
This mismatch highlights a key risk in current chatbot design. Visible thinking can prime users to expect more than the system delivers, especially in sensitive contexts. In habit-focused conversations, some participants found emotionally supportive framing excessive or unnecessary. In feelings-related discussions, expertise-oriented thinking, while increasing trust, sometimes made the chatbot feel distant or unsuitable for emotional support.
The study shows that there is no single optimal style of visible thinking. Instead, its effectiveness depends on alignment between the chatbot’s framing, the user’s goals, and the emotional context of the interaction. Emotional framing can foster openness and comfort, but it can also feel artificial or patronizing to users who view chatbots primarily as tools. Expertise framing can strengthen confidence in advice, yet it may discourage emotional disclosure or leave users feeling unheard.
Notably, visible thinking also influenced how users judged effort and time. The brief delay during which the chatbot appeared to think was often interpreted as a sign of care or diligence. Emotionally supportive thinking made the pause feel like consideration of the user’s feelings. Expertise-oriented thinking made it feel like careful analysis. In both cases, the pause itself contributed to perceptions of quality, countering the assumption that faster responses are always better.
These effects mirror broader findings in human communication, where pauses, framing, and tone shape trust as much as content. The study demonstrates that conversational AI now operates firmly within this social territory, whether designers intend it or not.
Implications for AI design and policy
Visible thinking shapes not only user satisfaction but also reliance, compliance, and emotional engagement. In contexts where advice carries real consequences, misaligned expectations can have serious implications.
The findings suggest that transparency in AI should not be treated as a purely technical challenge. Showing a system’s intentions or values is a communicative act that carries social and ethical weight. Designers must consider when and how visible thinking is appropriate, and whether it should adapt dynamically to user needs and conversational context.
The study also raises questions for regulators and organizations deploying AI at scale. If visible thinking can increase trust and willingness to follow advice, it may amplify the influence of systems whose underlying capabilities remain limited. Conversely, poorly calibrated thinking displays may erode trust or create emotional dependence without delivering meaningful support.
The researchers argue that future conversational agents should avoid one-size-fits-all approaches. Personalization, contextual awareness, and careful calibration of tone may help balance warmth and competence without triggering expectancy violations. Equally important is transparency about what visible thinking represents and what it does not. These statements are not windows into machine cognition but design choices that shape perception.
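As a minimal sketch of what such calibration might look like in practice, the snippet below selects a framing from the conversational context, using the article's own habit-versus-feelings distinction. The topic labels, the tool-like preference flag, and the select_thinking_style function are hypothetical illustrations, not a mechanism proposed by the researchers.

```python
# Hypothetical sketch: choose a visible-thinking style from conversational
# context instead of applying a single framing to every interaction.

def select_thinking_style(topic: str, user_prefers_tool_like: bool) -> str:
    """Pick a framing aligned with the user's goal and emotional context.

    Assumed mapping (based on the study's reported mismatches): users who
    treat the chatbot as a tool get no visible thinking, feelings-related
    topics get emotional framing, and habit-related topics get expertise
    framing.
    """
    if user_prefers_tool_like:
        return "none"
    if topic == "feelings":
        return "emotional"
    return "expertise"

print(select_thinking_style("feelings", user_prefers_tool_like=False))  # emotional
print(select_thinking_style("habit", user_prefers_tool_like=False))     # expertise
print(select_thinking_style("habit", user_prefers_tool_like=True))      # none
```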
Ultimately, the study underscores a shift in how AI systems are evaluated. Users do not judge chatbots solely on correctness or efficiency. They respond to signals of intention, effort, and care, even when those signals are algorithmically generated. As visible thinking becomes more common, understanding its psychological and social impact will be essential.
In revealing how a few lines of text can transform trust, empathy, and engagement, the research makes clear that the future of conversational AI will be shaped as much by communication design as by technical performance.
First published in: Devdiscourse

